Mar 21 12:39:59.904027 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 21 10:52:59 -00 2025 Mar 21 12:39:59.904051 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2 Mar 21 12:39:59.904062 kernel: BIOS-provided physical RAM map: Mar 21 12:39:59.904069 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable Mar 21 12:39:59.904075 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved Mar 21 12:39:59.904085 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable Mar 21 12:39:59.904092 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved Mar 21 12:39:59.904099 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable Mar 21 12:39:59.904106 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Mar 21 12:39:59.904121 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Mar 21 12:39:59.904129 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Mar 21 12:39:59.904138 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Mar 21 12:39:59.904146 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Mar 21 12:39:59.904155 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Mar 21 12:39:59.904165 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable Mar 21 12:39:59.904173 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved Mar 21 12:39:59.904180 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 
21 12:39:59.904187 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 21 12:39:59.904194 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 21 12:39:59.904203 kernel: NX (Execute Disable) protection: active Mar 21 12:39:59.904211 kernel: APIC: Static calls initialized Mar 21 12:39:59.904218 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable Mar 21 12:39:59.904227 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable Mar 21 12:39:59.904235 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable Mar 21 12:39:59.904244 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable Mar 21 12:39:59.904251 kernel: extended physical RAM map: Mar 21 12:39:59.904258 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable Mar 21 12:39:59.904265 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved Mar 21 12:39:59.904272 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable Mar 21 12:39:59.904279 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved Mar 21 12:39:59.904289 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a14a017] usable Mar 21 12:39:59.904296 kernel: reserve setup_data: [mem 0x000000009a14a018-0x000000009a186e57] usable Mar 21 12:39:59.904303 kernel: reserve setup_data: [mem 0x000000009a186e58-0x000000009a187017] usable Mar 21 12:39:59.904319 kernel: reserve setup_data: [mem 0x000000009a187018-0x000000009a190c57] usable Mar 21 12:39:59.904356 kernel: reserve setup_data: [mem 0x000000009a190c58-0x000000009b8ecfff] usable Mar 21 12:39:59.904371 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved Mar 21 12:39:59.904378 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data Mar 21 12:39:59.904386 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS Mar 21 12:39:59.904393 
kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable Mar 21 12:39:59.904403 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved Mar 21 12:39:59.904418 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS Mar 21 12:39:59.904430 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable Mar 21 12:39:59.904437 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved Mar 21 12:39:59.904445 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved Mar 21 12:39:59.904452 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved Mar 21 12:39:59.904460 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Mar 21 12:39:59.904470 kernel: efi: EFI v2.7 by EDK II Mar 21 12:39:59.904479 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018 Mar 21 12:39:59.904488 kernel: random: crng init done Mar 21 12:39:59.904497 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Mar 21 12:39:59.904505 kernel: secureboot: Secure boot enabled Mar 21 12:39:59.904512 kernel: SMBIOS 2.8 present. 
Mar 21 12:39:59.904520 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022 Mar 21 12:39:59.904527 kernel: Hypervisor detected: KVM Mar 21 12:39:59.904535 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 21 12:39:59.904542 kernel: kvm-clock: using sched offset of 3869155683 cycles Mar 21 12:39:59.904550 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 21 12:39:59.904560 kernel: tsc: Detected 2794.748 MHz processor Mar 21 12:39:59.904571 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 21 12:39:59.904580 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 21 12:39:59.904588 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000 Mar 21 12:39:59.904596 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Mar 21 12:39:59.904603 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 21 12:39:59.904611 kernel: Using GB pages for direct mapping Mar 21 12:39:59.904619 kernel: ACPI: Early table checksum verification disabled Mar 21 12:39:59.904626 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS ) Mar 21 12:39:59.904636 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Mar 21 12:39:59.904644 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Mar 21 12:39:59.904654 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 21 12:39:59.904664 kernel: ACPI: FACS 0x000000009BBDD000 000040 Mar 21 12:39:59.904672 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 21 12:39:59.904679 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 21 12:39:59.904687 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Mar 21 12:39:59.904695 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 
00000001) Mar 21 12:39:59.904703 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013) Mar 21 12:39:59.904713 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3] Mar 21 12:39:59.904720 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224] Mar 21 12:39:59.904728 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f] Mar 21 12:39:59.904738 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f] Mar 21 12:39:59.904747 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037] Mar 21 12:39:59.904755 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b] Mar 21 12:39:59.904763 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027] Mar 21 12:39:59.904771 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037] Mar 21 12:39:59.904778 kernel: No NUMA configuration found Mar 21 12:39:59.904788 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff] Mar 21 12:39:59.904796 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff] Mar 21 12:39:59.904804 kernel: Zone ranges: Mar 21 12:39:59.904811 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 21 12:39:59.904821 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff] Mar 21 12:39:59.904831 kernel: Normal empty Mar 21 12:39:59.904838 kernel: Movable zone start for each node Mar 21 12:39:59.904846 kernel: Early memory node ranges Mar 21 12:39:59.904854 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff] Mar 21 12:39:59.904861 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff] Mar 21 12:39:59.904871 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff] Mar 21 12:39:59.904879 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff] Mar 21 12:39:59.904886 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff] Mar 21 12:39:59.904894 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff] Mar 21 
12:39:59.904903 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 21 12:39:59.904913 kernel: On node 0, zone DMA: 32 pages in unavailable ranges Mar 21 12:39:59.904921 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Mar 21 12:39:59.904929 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 21 12:39:59.904936 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges Mar 21 12:39:59.904946 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges Mar 21 12:39:59.904954 kernel: ACPI: PM-Timer IO Port: 0x608 Mar 21 12:39:59.904961 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 21 12:39:59.904969 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 21 12:39:59.904977 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Mar 21 12:39:59.904986 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 21 12:39:59.904996 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 21 12:39:59.905004 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 21 12:39:59.905012 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 21 12:39:59.905021 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 21 12:39:59.905029 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Mar 21 12:39:59.905037 kernel: TSC deadline timer available Mar 21 12:39:59.905044 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Mar 21 12:39:59.905052 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Mar 21 12:39:59.905060 kernel: kvm-guest: KVM setup pv remote TLB flush Mar 21 12:39:59.905077 kernel: kvm-guest: setup PV sched yield Mar 21 12:39:59.905088 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices Mar 21 12:39:59.905096 kernel: Booting paravirtualized kernel on KVM Mar 21 12:39:59.905105 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 
1910969940391419 ns Mar 21 12:39:59.905121 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Mar 21 12:39:59.905130 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288 Mar 21 12:39:59.905140 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152 Mar 21 12:39:59.905148 kernel: pcpu-alloc: [0] 0 1 2 3 Mar 21 12:39:59.905156 kernel: kvm-guest: PV spinlocks enabled Mar 21 12:39:59.905165 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 21 12:39:59.905176 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2 Mar 21 12:39:59.905186 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 21 12:39:59.905194 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 21 12:39:59.905202 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Mar 21 12:39:59.905212 kernel: Fallback order for Node 0: 0 Mar 21 12:39:59.905220 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 625927 Mar 21 12:39:59.905228 kernel: Policy zone: DMA32 Mar 21 12:39:59.905236 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 21 12:39:59.905245 kernel: Memory: 2368304K/2552216K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43588K init, 1476K bss, 183656K reserved, 0K cma-reserved) Mar 21 12:39:59.905257 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Mar 21 12:39:59.905267 kernel: ftrace: allocating 37985 entries in 149 pages Mar 21 12:39:59.905275 kernel: ftrace: allocated 149 pages with 4 groups Mar 21 12:39:59.905283 kernel: Dynamic Preempt: voluntary Mar 21 12:39:59.905291 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 21 12:39:59.905299 kernel: rcu: RCU event tracing is enabled. Mar 21 12:39:59.905308 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Mar 21 12:39:59.905316 kernel: Trampoline variant of Tasks RCU enabled. Mar 21 12:39:59.905324 kernel: Rude variant of Tasks RCU enabled. Mar 21 12:39:59.905357 kernel: Tracing variant of Tasks RCU enabled. Mar 21 12:39:59.905367 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Mar 21 12:39:59.905382 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Mar 21 12:39:59.905398 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Mar 21 12:39:59.905406 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Mar 21 12:39:59.905414 kernel: Console: colour dummy device 80x25 Mar 21 12:39:59.905422 kernel: printk: console [ttyS0] enabled Mar 21 12:39:59.905433 kernel: ACPI: Core revision 20230628 Mar 21 12:39:59.905442 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Mar 21 12:39:59.905453 kernel: APIC: Switch to symmetric I/O mode setup Mar 21 12:39:59.905461 kernel: x2apic enabled Mar 21 12:39:59.905469 kernel: APIC: Switched APIC routing to: physical x2apic Mar 21 12:39:59.905478 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Mar 21 12:39:59.905486 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Mar 21 12:39:59.905494 kernel: kvm-guest: setup PV IPIs Mar 21 12:39:59.905502 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Mar 21 12:39:59.905511 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Mar 21 12:39:59.905521 kernel: Calibrating delay loop (skipped) preset value.. 
5589.49 BogoMIPS (lpj=2794748) Mar 21 12:39:59.905532 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Mar 21 12:39:59.905540 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Mar 21 12:39:59.905548 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Mar 21 12:39:59.905556 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 21 12:39:59.905564 kernel: Spectre V2 : Mitigation: Retpolines Mar 21 12:39:59.905572 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 21 12:39:59.905580 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 21 12:39:59.905588 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Mar 21 12:39:59.905598 kernel: RETBleed: Mitigation: untrained return thunk Mar 21 12:39:59.905611 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 21 12:39:59.905619 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 21 12:39:59.905627 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Mar 21 12:39:59.905636 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Mar 21 12:39:59.905644 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Mar 21 12:39:59.905652 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 21 12:39:59.905660 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 21 12:39:59.905668 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 21 12:39:59.905679 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 21 12:39:59.905689 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Mar 21 12:39:59.905698 kernel: Freeing SMP alternatives memory: 32K Mar 21 12:39:59.905706 kernel: pid_max: default: 32768 minimum: 301 Mar 21 12:39:59.905714 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 21 12:39:59.905722 kernel: landlock: Up and running. Mar 21 12:39:59.905730 kernel: SELinux: Initializing. Mar 21 12:39:59.905738 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 21 12:39:59.905746 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Mar 21 12:39:59.905756 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Mar 21 12:39:59.905766 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 21 12:39:59.905776 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 21 12:39:59.905785 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Mar 21 12:39:59.905793 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Mar 21 12:39:59.905801 kernel: ... version: 0 Mar 21 12:39:59.905809 kernel: ... bit width: 48 Mar 21 12:39:59.905817 kernel: ... generic registers: 6 Mar 21 12:39:59.905825 kernel: ... value mask: 0000ffffffffffff Mar 21 12:39:59.905835 kernel: ... max period: 00007fffffffffff Mar 21 12:39:59.905843 kernel: ... fixed-purpose events: 0 Mar 21 12:39:59.905854 kernel: ... event mask: 000000000000003f Mar 21 12:39:59.905864 kernel: signal: max sigframe size: 1776 Mar 21 12:39:59.905872 kernel: rcu: Hierarchical SRCU implementation. Mar 21 12:39:59.905880 kernel: rcu: Max phase no-delay instances is 400. Mar 21 12:39:59.905888 kernel: smp: Bringing up secondary CPUs ... Mar 21 12:39:59.905896 kernel: smpboot: x86: Booting SMP configuration: Mar 21 12:39:59.905904 kernel: .... 
node #0, CPUs: #1 #2 #3 Mar 21 12:39:59.905914 kernel: smp: Brought up 1 node, 4 CPUs Mar 21 12:39:59.905922 kernel: smpboot: Max logical packages: 1 Mar 21 12:39:59.905930 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS) Mar 21 12:39:59.905941 kernel: devtmpfs: initialized Mar 21 12:39:59.905950 kernel: x86/mm: Memory block size: 128MB Mar 21 12:39:59.905958 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes) Mar 21 12:39:59.905966 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes) Mar 21 12:39:59.905974 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 21 12:39:59.905983 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Mar 21 12:39:59.905993 kernel: pinctrl core: initialized pinctrl subsystem Mar 21 12:39:59.906001 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 21 12:39:59.906009 kernel: audit: initializing netlink subsys (disabled) Mar 21 12:39:59.906019 kernel: audit: type=2000 audit(1742560799.073:1): state=initialized audit_enabled=0 res=1 Mar 21 12:39:59.906029 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 21 12:39:59.906038 kernel: thermal_sys: Registered thermal governor 'user_space' Mar 21 12:39:59.906046 kernel: cpuidle: using governor menu Mar 21 12:39:59.906054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 21 12:39:59.906062 kernel: dca service started, version 1.12.1 Mar 21 12:39:59.906073 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000) Mar 21 12:39:59.906081 kernel: PCI: Using configuration type 1 for base access Mar 21 12:39:59.906089 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Mar 21 12:39:59.906098 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 21 12:39:59.906108 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 21 12:39:59.906125 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 21 12:39:59.906133 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 21 12:39:59.906141 kernel: ACPI: Added _OSI(Module Device) Mar 21 12:39:59.906149 kernel: ACPI: Added _OSI(Processor Device) Mar 21 12:39:59.906159 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 21 12:39:59.906167 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 21 12:39:59.906175 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Mar 21 12:39:59.906183 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 21 12:39:59.906191 kernel: ACPI: Interpreter enabled Mar 21 12:39:59.906201 kernel: ACPI: PM: (supports S0 S5) Mar 21 12:39:59.906212 kernel: ACPI: Using IOAPIC for interrupt routing Mar 21 12:39:59.906220 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 21 12:39:59.906228 kernel: PCI: Using E820 reservations for host bridge windows Mar 21 12:39:59.906238 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Mar 21 12:39:59.906246 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 21 12:39:59.906469 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Mar 21 12:39:59.906608 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Mar 21 12:39:59.906740 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Mar 21 12:39:59.906753 kernel: PCI host bridge to bus 0000:00 Mar 21 12:39:59.906886 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 21 12:39:59.907014 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 21 12:39:59.907145 kernel: pci_bus 0000:00: root bus resource [mem 
0x000a0000-0x000bffff window] Mar 21 12:39:59.907269 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window] Mar 21 12:39:59.907405 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window] Mar 21 12:39:59.907530 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window] Mar 21 12:39:59.907650 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 21 12:39:59.907802 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Mar 21 12:39:59.907958 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Mar 21 12:39:59.908097 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Mar 21 12:39:59.908241 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Mar 21 12:39:59.908390 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Mar 21 12:39:59.908525 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Mar 21 12:39:59.908657 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Mar 21 12:39:59.908808 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Mar 21 12:39:59.908945 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Mar 21 12:39:59.909077 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Mar 21 12:39:59.909222 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref] Mar 21 12:39:59.909392 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Mar 21 12:39:59.909526 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Mar 21 12:39:59.909658 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Mar 21 12:39:59.909796 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref] Mar 21 12:39:59.909941 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 21 12:39:59.910081 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Mar 21 12:39:59.910232 kernel: pci 0000:00:04.0: reg 0x14: [mem 
0xc1041000-0xc1041fff] Mar 21 12:39:59.910374 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref] Mar 21 12:39:59.910501 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Mar 21 12:39:59.910632 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Mar 21 12:39:59.910762 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Mar 21 12:39:59.910895 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Mar 21 12:39:59.911019 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Mar 21 12:39:59.911153 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Mar 21 12:39:59.911286 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Mar 21 12:39:59.911436 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Mar 21 12:39:59.911451 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 21 12:39:59.911460 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Mar 21 12:39:59.911468 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 21 12:39:59.911476 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 21 12:39:59.911484 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Mar 21 12:39:59.911492 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Mar 21 12:39:59.911500 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Mar 21 12:39:59.911508 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Mar 21 12:39:59.911516 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Mar 21 12:39:59.911526 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Mar 21 12:39:59.911534 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Mar 21 12:39:59.911542 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Mar 21 12:39:59.911550 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Mar 21 12:39:59.911557 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 
Mar 21 12:39:59.911565 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Mar 21 12:39:59.911573 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Mar 21 12:39:59.911581 kernel: iommu: Default domain type: Translated Mar 21 12:39:59.911589 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 21 12:39:59.911599 kernel: efivars: Registered efivars operations Mar 21 12:39:59.911607 kernel: PCI: Using ACPI for IRQ routing Mar 21 12:39:59.911615 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 21 12:39:59.911623 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff] Mar 21 12:39:59.911631 kernel: e820: reserve RAM buffer [mem 0x9a14a018-0x9bffffff] Mar 21 12:39:59.911639 kernel: e820: reserve RAM buffer [mem 0x9a187018-0x9bffffff] Mar 21 12:39:59.911646 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff] Mar 21 12:39:59.911654 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff] Mar 21 12:39:59.911778 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Mar 21 12:39:59.911905 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Mar 21 12:39:59.912028 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Mar 21 12:39:59.912039 kernel: vgaarb: loaded Mar 21 12:39:59.912047 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Mar 21 12:39:59.912055 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Mar 21 12:39:59.912063 kernel: clocksource: Switched to clocksource kvm-clock Mar 21 12:39:59.912071 kernel: VFS: Disk quotas dquot_6.6.0 Mar 21 12:39:59.912079 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 21 12:39:59.912087 kernel: pnp: PnP ACPI init Mar 21 12:39:59.912243 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved Mar 21 12:39:59.912255 kernel: pnp: PnP ACPI: found 6 devices Mar 21 12:39:59.912263 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 21 
12:39:59.912272 kernel: NET: Registered PF_INET protocol family
Mar 21 12:39:59.912280 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 12:39:59.912288 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 12:39:59.912296 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 12:39:59.912304 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 12:39:59.912315 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 12:39:59.912324 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 12:39:59.912367 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:39:59.912375 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:39:59.912383 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 12:39:59.912391 kernel: NET: Registered PF_XDP protocol family
Mar 21 12:39:59.912519 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 21 12:39:59.912642 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 21 12:39:59.912759 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 21 12:39:59.912872 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 21 12:39:59.912984 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 21 12:39:59.913095 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 21 12:39:59.913216 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 21 12:39:59.913341 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 21 12:39:59.913352 kernel: PCI: CLS 0 bytes, default 64
Mar 21 12:39:59.913360 kernel: Initialise system trusted keyrings
Mar 21 12:39:59.913372 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 12:39:59.913380 kernel: Key type asymmetric registered
Mar 21 12:39:59.913388 kernel: Asymmetric key parser 'x509' registered
Mar 21 12:39:59.913396 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 21 12:39:59.913404 kernel: io scheduler mq-deadline registered
Mar 21 12:39:59.913412 kernel: io scheduler kyber registered
Mar 21 12:39:59.913420 kernel: io scheduler bfq registered
Mar 21 12:39:59.913428 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 21 12:39:59.913452 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 21 12:39:59.913463 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 21 12:39:59.913473 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 21 12:39:59.913482 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 12:39:59.913490 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 21 12:39:59.913498 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 21 12:39:59.913507 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 21 12:39:59.913515 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 21 12:39:59.913646 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 21 12:39:59.913765 kernel: rtc_cmos 00:04: registered as rtc0
Mar 21 12:39:59.913779 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Mar 21 12:39:59.913895 kernel: rtc_cmos 00:04: setting system clock to 2025-03-21T12:39:59 UTC (1742560799)
Mar 21 12:39:59.914012 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 21 12:39:59.914023 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 21 12:39:59.914031 kernel: efifb: probing for efifb
Mar 21 12:39:59.914039 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 21 12:39:59.914048 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 21 12:39:59.914056 kernel: efifb: scrolling: redraw
Mar 21 12:39:59.914067 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 21 12:39:59.914076 kernel: Console: switching to colour frame buffer device 160x50
Mar 21 12:39:59.914084 kernel: fb0: EFI VGA frame buffer device
Mar 21 12:39:59.914092 kernel: pstore: Using crash dump compression: deflate
Mar 21 12:39:59.914100 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 21 12:39:59.914109 kernel: NET: Registered PF_INET6 protocol family
Mar 21 12:39:59.914125 kernel: Segment Routing with IPv6
Mar 21 12:39:59.914133 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 12:39:59.914142 kernel: NET: Registered PF_PACKET protocol family
Mar 21 12:39:59.914153 kernel: Key type dns_resolver registered
Mar 21 12:39:59.914163 kernel: IPI shorthand broadcast: enabled
Mar 21 12:39:59.914172 kernel: sched_clock: Marking stable (590003014, 132431538)->(767067184, -44632632)
Mar 21 12:39:59.914180 kernel: registered taskstats version 1
Mar 21 12:39:59.914188 kernel: Loading compiled-in X.509 certificates
Mar 21 12:39:59.914197 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: d76f2258ffed89096a9428010e5ac0a0babcea9e'
Mar 21 12:39:59.914207 kernel: Key type .fscrypt registered
Mar 21 12:39:59.914216 kernel: Key type fscrypt-provisioning registered
Mar 21 12:39:59.914224 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 12:39:59.914232 kernel: ima: Allocated hash algorithm: sha1
Mar 21 12:39:59.914240 kernel: ima: No architecture policies found
Mar 21 12:39:59.914249 kernel: clk: Disabling unused clocks
Mar 21 12:39:59.914257 kernel: Freeing unused kernel image (initmem) memory: 43588K
Mar 21 12:39:59.914265 kernel: Write protecting the kernel read-only data: 40960k
Mar 21 12:39:59.914274 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 21 12:39:59.914285 kernel: Run /init as init process
Mar 21 12:39:59.914293 kernel: with arguments:
Mar 21 12:39:59.914301 kernel: /init
Mar 21 12:39:59.914309 kernel: with environment:
Mar 21 12:39:59.914317 kernel: HOME=/
Mar 21 12:39:59.914325 kernel: TERM=linux
Mar 21 12:39:59.914411 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 21 12:39:59.914420 systemd[1]: Successfully made /usr/ read-only.
Mar 21 12:39:59.914432 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:39:59.914444 systemd[1]: Detected virtualization kvm.
Mar 21 12:39:59.914453 systemd[1]: Detected architecture x86-64.
Mar 21 12:39:59.914461 systemd[1]: Running in initrd.
Mar 21 12:39:59.914470 systemd[1]: No hostname configured, using default hostname.
Mar 21 12:39:59.914479 systemd[1]: Hostname set to .
Mar 21 12:39:59.914488 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:39:59.914496 systemd[1]: Queued start job for default target initrd.target.
Mar 21 12:39:59.914507 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:39:59.914516 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:39:59.914526 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 21 12:39:59.914534 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:39:59.914543 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 21 12:39:59.914553 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 21 12:39:59.914564 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 21 12:39:59.914575 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 21 12:39:59.914584 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:39:59.914593 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:39:59.914601 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:39:59.914610 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:39:59.914619 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:39:59.914628 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:39:59.914636 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:39:59.914647 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:39:59.914656 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 21 12:39:59.914665 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 21 12:39:59.914674 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:39:59.914683 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:39:59.914691 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:39:59.914700 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:39:59.914709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 21 12:39:59.914718 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:39:59.914729 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 21 12:39:59.914738 systemd[1]: Starting systemd-fsck-usr.service...
Mar 21 12:39:59.914746 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:39:59.914755 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:39:59.914764 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:39:59.914772 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 21 12:39:59.914781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:39:59.914793 systemd[1]: Finished systemd-fsck-usr.service.
Mar 21 12:39:59.914802 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:39:59.914834 systemd-journald[192]: Collecting audit messages is disabled.
Mar 21 12:39:59.914857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:39:59.914866 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:39:59.914875 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:39:59.914884 systemd-journald[192]: Journal started
Mar 21 12:39:59.914903 systemd-journald[192]: Runtime Journal (/run/log/journal/f7ce80f8700f498b8b948d6ced063714) is 6M, max 47.9M, 41.9M free.
Mar 21 12:39:59.900776 systemd-modules-load[193]: Inserted module 'overlay'
Mar 21 12:39:59.919031 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:39:59.921539 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:39:59.925452 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:39:59.930351 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 21 12:39:59.932815 systemd-modules-load[193]: Inserted module 'br_netfilter'
Mar 21 12:39:59.933736 kernel: Bridge firewalling registered
Mar 21 12:39:59.934718 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:39:59.936457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:39:59.937740 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:39:59.939958 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:39:59.942823 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:39:59.946731 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 21 12:39:59.961523 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:39:59.964094 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:39:59.972465 dracut-cmdline[228]: dracut-dracut-053
Mar 21 12:39:59.982187 dracut-cmdline[228]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:40:00.018380 systemd-resolved[231]: Positive Trust Anchors:
Mar 21 12:40:00.018392 systemd-resolved[231]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:40:00.018422 systemd-resolved[231]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:40:00.020890 systemd-resolved[231]: Defaulting to hostname 'linux'.
Mar 21 12:40:00.021924 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:40:00.029933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:40:00.060362 kernel: SCSI subsystem initialized
Mar 21 12:40:00.070366 kernel: Loading iSCSI transport class v2.0-870.
Mar 21 12:40:00.080365 kernel: iscsi: registered transport (tcp)
Mar 21 12:40:00.101358 kernel: iscsi: registered transport (qla4xxx)
Mar 21 12:40:00.101388 kernel: QLogic iSCSI HBA Driver
Mar 21 12:40:00.150010 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:40:00.152672 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 21 12:40:00.188752 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 21 12:40:00.188781 kernel: device-mapper: uevent: version 1.0.3
Mar 21 12:40:00.189781 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 21 12:40:00.231357 kernel: raid6: avx2x4 gen() 30125 MB/s
Mar 21 12:40:00.248347 kernel: raid6: avx2x2 gen() 30609 MB/s
Mar 21 12:40:00.265431 kernel: raid6: avx2x1 gen() 25981 MB/s
Mar 21 12:40:00.265454 kernel: raid6: using algorithm avx2x2 gen() 30609 MB/s
Mar 21 12:40:00.283442 kernel: raid6: .... xor() 19912 MB/s, rmw enabled
Mar 21 12:40:00.283464 kernel: raid6: using avx2x2 recovery algorithm
Mar 21 12:40:00.303350 kernel: xor: automatically using best checksumming function avx
Mar 21 12:40:00.448355 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 21 12:40:00.461704 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:40:00.464473 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:40:00.492296 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Mar 21 12:40:00.497632 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:40:00.502692 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 21 12:40:00.530346 dracut-pre-trigger[422]: rd.md=0: removing MD RAID activation
Mar 21 12:40:00.562798 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:40:00.566397 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:40:00.640522 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:40:00.645443 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 21 12:40:00.667874 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 21 12:40:00.669025 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 21 12:40:00.689055 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 21 12:40:00.689214 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 21 12:40:00.689232 kernel: cryptd: max_cpu_qlen set to 1000
Mar 21 12:40:00.689243 kernel: GPT:9289727 != 19775487
Mar 21 12:40:00.689253 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 21 12:40:00.689264 kernel: GPT:9289727 != 19775487
Mar 21 12:40:00.689274 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 21 12:40:00.689284 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:40:00.671675 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 12:40:00.673199 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:40:00.674444 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:40:00.677440 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 21 12:40:00.695350 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 21 12:40:00.695373 kernel: AES CTR mode by8 optimization enabled
Mar 21 12:40:00.707933 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 12:40:00.714347 kernel: libata version 3.00 loaded.
Mar 21 12:40:00.718515 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:40:00.718683 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:40:00.726947 kernel: ahci 0000:00:1f.2: version 3.0
Mar 21 12:40:00.749863 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 21 12:40:00.749880 kernel: BTRFS: device fsid c99b4410-5d95-4377-8189-88a588aa2514 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (474)
Mar 21 12:40:00.749891 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (462)
Mar 21 12:40:00.749902 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 21 12:40:00.750061 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 21 12:40:00.750220 kernel: scsi host0: ahci
Mar 21 12:40:00.750391 kernel: scsi host1: ahci
Mar 21 12:40:00.750548 kernel: scsi host2: ahci
Mar 21 12:40:00.750699 kernel: scsi host3: ahci
Mar 21 12:40:00.750841 kernel: scsi host4: ahci
Mar 21 12:40:00.750982 kernel: scsi host5: ahci
Mar 21 12:40:00.751140 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 21 12:40:00.751153 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 21 12:40:00.751163 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 21 12:40:00.751174 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 21 12:40:00.751185 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 21 12:40:00.751195 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 21 12:40:00.723802 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:40:00.727296 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:40:00.728559 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:00.729840 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:00.732545 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:00.745216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 21 12:40:00.781101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 21 12:40:00.790969 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 21 12:40:00.792237 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 21 12:40:00.802776 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:40:00.803643 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 21 12:40:00.803870 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:40:00.803918 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:00.806099 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:00.819817 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:00.829259 disk-uuid[560]: Primary Header is updated.
Mar 21 12:40:00.829259 disk-uuid[560]: Secondary Entries is updated.
Mar 21 12:40:00.829259 disk-uuid[560]: Secondary Header is updated.
Mar 21 12:40:00.832665 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:40:00.836350 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:40:00.844719 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:00.850500 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:40:00.888155 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:40:01.062087 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 21 12:40:01.062145 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 21 12:40:01.062157 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 21 12:40:01.062168 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 21 12:40:01.063367 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 21 12:40:01.064356 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 21 12:40:01.064371 kernel: ata3.00: applying bridge limits
Mar 21 12:40:01.065353 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 21 12:40:01.066353 kernel: ata3.00: configured for UDMA/100
Mar 21 12:40:01.066380 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 21 12:40:01.109367 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 21 12:40:01.122996 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 21 12:40:01.123014 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 21 12:40:01.838353 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:40:01.838583 disk-uuid[561]: The operation has completed successfully.
Mar 21 12:40:01.866993 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 21 12:40:01.867124 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 21 12:40:01.903037 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 21 12:40:01.917914 sh[597]: Success
Mar 21 12:40:01.930354 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 21 12:40:01.963077 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 21 12:40:01.966953 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 21 12:40:01.983600 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 21 12:40:01.990854 kernel: BTRFS info (device dm-0): first mount of filesystem c99b4410-5d95-4377-8189-88a588aa2514
Mar 21 12:40:01.990890 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:40:01.990902 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 21 12:40:01.992626 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 21 12:40:01.992640 kernel: BTRFS info (device dm-0): using free space tree
Mar 21 12:40:01.997237 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 21 12:40:01.998766 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 21 12:40:01.999582 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 21 12:40:02.002402 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 21 12:40:02.032149 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:40:02.032184 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:40:02.032196 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:40:02.035362 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:40:02.039355 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:40:02.045083 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 21 12:40:02.047407 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 21 12:40:02.111771 ignition[692]: Ignition 2.20.0
Mar 21 12:40:02.112145 ignition[692]: Stage: fetch-offline
Mar 21 12:40:02.112187 ignition[692]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:02.112198 ignition[692]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:02.112298 ignition[692]: parsed url from cmdline: ""
Mar 21 12:40:02.112302 ignition[692]: no config URL provided
Mar 21 12:40:02.112307 ignition[692]: reading system config file "/usr/lib/ignition/user.ign"
Mar 21 12:40:02.112319 ignition[692]: no config at "/usr/lib/ignition/user.ign"
Mar 21 12:40:02.112360 ignition[692]: op(1): [started] loading QEMU firmware config module
Mar 21 12:40:02.112366 ignition[692]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 21 12:40:02.119319 ignition[692]: op(1): [finished] loading QEMU firmware config module
Mar 21 12:40:02.127375 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 12:40:02.130771 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:40:02.161410 ignition[692]: parsing config with SHA512: 04f0e1c662202565798f9b3947561a811deee010e0834fd7f7ae54d27a4cd77945e709af1d5c093444c955480cb6bff5f2c5091ab73c3463ea04117b21217d04
Mar 21 12:40:02.165217 unknown[692]: fetched base config from "system"
Mar 21 12:40:02.165228 unknown[692]: fetched user config from "qemu"
Mar 21 12:40:02.165573 ignition[692]: fetch-offline: fetch-offline passed
Mar 21 12:40:02.165637 ignition[692]: Ignition finished successfully
Mar 21 12:40:02.171204 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 12:40:02.174527 systemd-networkd[786]: lo: Link UP
Mar 21 12:40:02.174537 systemd-networkd[786]: lo: Gained carrier
Mar 21 12:40:02.177438 systemd-networkd[786]: Enumeration completed
Mar 21 12:40:02.177535 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:40:02.178320 systemd[1]: Reached target network.target - Network.
Mar 21 12:40:02.181078 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 21 12:40:02.181143 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:40:02.181147 systemd-networkd[786]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:40:02.182038 systemd-networkd[786]: eth0: Link UP
Mar 21 12:40:02.182042 systemd-networkd[786]: eth0: Gained carrier
Mar 21 12:40:02.182055 systemd-networkd[786]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:40:02.182525 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 21 12:40:02.202373 systemd-networkd[786]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 21 12:40:02.218281 ignition[790]: Ignition 2.20.0
Mar 21 12:40:02.218291 ignition[790]: Stage: kargs
Mar 21 12:40:02.218447 ignition[790]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:02.218457 ignition[790]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:02.222056 ignition[790]: kargs: kargs passed
Mar 21 12:40:02.222097 ignition[790]: Ignition finished successfully
Mar 21 12:40:02.226414 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 21 12:40:02.229238 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 21 12:40:02.254666 ignition[799]: Ignition 2.20.0
Mar 21 12:40:02.254676 ignition[799]: Stage: disks
Mar 21 12:40:02.254817 ignition[799]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:02.254828 ignition[799]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:02.258418 ignition[799]: disks: disks passed
Mar 21 12:40:02.258464 ignition[799]: Ignition finished successfully
Mar 21 12:40:02.261546 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 21 12:40:02.261787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 21 12:40:02.264543 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 21 12:40:02.264757 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:40:02.265097 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 12:40:02.265589 systemd[1]: Reached target basic.target - Basic System.
Mar 21 12:40:02.272983 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 21 12:40:02.302347 systemd-fsck[809]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 21 12:40:02.308305 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 21 12:40:02.312294 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 21 12:40:02.409359 kernel: EXT4-fs (vda9): mounted filesystem c540419e-275b-4bd7-8ebd-24b19ec75c0b r/w with ordered data mode. Quota mode: none.
Mar 21 12:40:02.410220 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 21 12:40:02.410855 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:40:02.413970 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 12:40:02.415602 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 21 12:40:02.416820 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 21 12:40:02.416861 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 21 12:40:02.416883 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 12:40:02.430398 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 21 12:40:02.431651 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 21 12:40:02.436873 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (817)
Mar 21 12:40:02.436897 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:40:02.437767 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:40:02.437779 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:40:02.441379 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:40:02.442909 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 12:40:02.469702 initrd-setup-root[841]: cut: /sysroot/etc/passwd: No such file or directory
Mar 21 12:40:02.473814 initrd-setup-root[848]: cut: /sysroot/etc/group: No such file or directory
Mar 21 12:40:02.477523 initrd-setup-root[855]: cut: /sysroot/etc/shadow: No such file or directory
Mar 21 12:40:02.481125 initrd-setup-root[862]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 21 12:40:02.566158 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 21 12:40:02.569237 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 21 12:40:02.571798 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 21 12:40:02.588351 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:40:02.600500 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 21 12:40:02.612233 ignition[931]: INFO : Ignition 2.20.0
Mar 21 12:40:02.612233 ignition[931]: INFO : Stage: mount
Mar 21 12:40:02.613847 ignition[931]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:02.613847 ignition[931]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:02.616627 ignition[931]: INFO : mount: mount passed
Mar 21 12:40:02.617482 ignition[931]: INFO : Ignition finished successfully
Mar 21 12:40:02.620301 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 21 12:40:02.622269 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 21 12:40:02.990512 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 21 12:40:02.992145 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 12:40:03.013009 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (944)
Mar 21 12:40:03.013065 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:40:03.013089 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:40:03.013888 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:40:03.017348 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:40:03.019002 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 12:40:03.049705 ignition[961]: INFO : Ignition 2.20.0
Mar 21 12:40:03.049705 ignition[961]: INFO : Stage: files
Mar 21 12:40:03.051444 ignition[961]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:03.051444 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:03.054086 ignition[961]: DEBUG : files: compiled without relabeling support, skipping
Mar 21 12:40:03.055380 ignition[961]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 21 12:40:03.055380 ignition[961]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 21 12:40:03.060070 ignition[961]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 21 12:40:03.061487 ignition[961]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 21 12:40:03.062782 ignition[961]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 21 12:40:03.061948 unknown[961]: wrote ssh authorized keys file for user: core
Mar 21 12:40:03.066101 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 12:40:03.068044 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 21 12:40:03.112436 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 21 12:40:03.209220 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 21 12:40:03.209220 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:40:03.213192 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 21 12:40:03.550490 systemd-networkd[786]: eth0: Gained IPv6LL
Mar 21 12:40:03.704028 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 21 12:40:04.065778 ignition[961]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 21 12:40:04.065778 ignition[961]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 21 12:40:04.070082 ignition[961]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:40:04.088508 ignition[961]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:40:04.093613 ignition[961]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:40:04.095405 ignition[961]: INFO : files: files passed
Mar 21 12:40:04.095405 ignition[961]: INFO : Ignition finished successfully
Mar 21 12:40:04.097148 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 21 12:40:04.100703 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 21 12:40:04.103084 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 21 12:40:04.119640 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 21 12:40:04.119791 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 21 12:40:04.123942 initrd-setup-root-after-ignition[990]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 21 12:40:04.125637 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:40:04.125637 initrd-setup-root-after-ignition[993]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:40:04.131615 initrd-setup-root-after-ignition[997]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:40:04.126832 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
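The file-stage ops logged above (op(1) through op(12)) are what Ignition emits while applying a provisioning config: a user modification, SSH keys, several files fetched or written under /sysroot, a sysext symlink, and unit presets. As a rough illustration only, a Butane config along these lines would produce a similar op sequence; every value below is a placeholder reconstructed from the log messages, not the actual config this machine booted with:

```
# Hypothetical Butane sketch (NOT the real config; contents/keys are placeholders)
variant: flatcar
version: 1.0.0
passwd:
  users:
    - name: core                                   # op(1)/op(2): usermod + ssh keys
      ssh_authorized_keys:
        - ssh-ed25519 AAAA...placeholder
storage:
  files:
    - path: /opt/helm-v3.13.2-linux-amd64.tar.gz   # op(3): fetched over HTTPS
      contents:
        source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz
    - path: /etc/flatcar/update.conf               # op(8): contents unknown from log
      contents:
        inline: |
          # placeholder
  links:
    - path: /etc/extensions/kubernetes.raw         # op(9): sysext activation symlink
      target: /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw
systemd:
  units:
    - name: prepare-helm.service                   # op(b)/op(11): written, preset enabled
      enabled: true
      contents: |
        # placeholder unit body
    - name: coreos-metadata.service                # op(d)/op(f): written, preset disabled
      enabled: false
      contents: |
        # placeholder unit body
```

Transpiled with `butane`, such a config yields the Ignition JSON whose application the `ignition[961]` records trace.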
Mar 21 12:40:04.129442 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 21 12:40:04.132599 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 21 12:40:04.199053 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 21 12:40:04.199201 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 21 12:40:04.201727 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 21 12:40:04.204054 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 21 12:40:04.205241 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 21 12:40:04.206161 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 21 12:40:04.231075 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:40:04.234728 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 21 12:40:04.255175 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:40:04.257494 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:40:04.258773 systemd[1]: Stopped target timers.target - Timer Units.
Mar 21 12:40:04.260696 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 21 12:40:04.260806 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:40:04.263114 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 21 12:40:04.264656 systemd[1]: Stopped target basic.target - Basic System.
Mar 21 12:40:04.266661 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 21 12:40:04.268692 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 12:40:04.270672 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 21 12:40:04.272804 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 21 12:40:04.274897 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 12:40:04.277165 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 21 12:40:04.279133 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 21 12:40:04.281291 systemd[1]: Stopped target swap.target - Swaps.
Mar 21 12:40:04.283239 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 21 12:40:04.283410 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 12:40:04.285522 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:40:04.286956 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:40:04.289049 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 21 12:40:04.289170 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:40:04.291280 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 21 12:40:04.291410 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 21 12:40:04.293781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 21 12:40:04.293900 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 12:40:04.295851 systemd[1]: Stopped target paths.target - Path Units.
Mar 21 12:40:04.297574 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 21 12:40:04.301432 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:40:04.303306 systemd[1]: Stopped target slices.target - Slice Units.
Mar 21 12:40:04.305248 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 21 12:40:04.307040 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 21 12:40:04.307140 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:40:04.309084 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 21 12:40:04.309179 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:40:04.311544 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 21 12:40:04.311669 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 12:40:04.313600 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 21 12:40:04.313709 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 21 12:40:04.316279 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 21 12:40:04.318768 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 21 12:40:04.319757 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 21 12:40:04.319871 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:40:04.321929 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 21 12:40:04.322047 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:40:04.328062 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 21 12:40:04.328163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 21 12:40:04.337130 ignition[1017]: INFO : Ignition 2.20.0
Mar 21 12:40:04.337130 ignition[1017]: INFO : Stage: umount
Mar 21 12:40:04.337130 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:40:04.337130 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:40:04.337130 ignition[1017]: INFO : umount: umount passed
Mar 21 12:40:04.337130 ignition[1017]: INFO : Ignition finished successfully
Mar 21 12:40:04.338824 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 21 12:40:04.338948 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 21 12:40:04.341564 systemd[1]: Stopped target network.target - Network.
Mar 21 12:40:04.342918 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 21 12:40:04.343004 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 21 12:40:04.344696 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 21 12:40:04.344745 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 21 12:40:04.346825 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 21 12:40:04.346874 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 21 12:40:04.348837 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 21 12:40:04.348886 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 21 12:40:04.351046 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 21 12:40:04.352886 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 21 12:40:04.355964 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 21 12:40:04.359876 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 21 12:40:04.360008 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 21 12:40:04.364834 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 21 12:40:04.365121 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 21 12:40:04.365239 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 21 12:40:04.368409 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 21 12:40:04.369156 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 21 12:40:04.369222 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:40:04.371423 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 21 12:40:04.372752 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 21 12:40:04.372803 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 12:40:04.374950 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 12:40:04.375002 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:40:04.377048 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 21 12:40:04.377095 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:40:04.379235 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 21 12:40:04.379278 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:40:04.381443 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:40:04.384932 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 12:40:04.385004 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:40:04.392712 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 12:40:04.392833 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 12:40:04.403186 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 21 12:40:04.403389 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:40:04.405660 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 21 12:40:04.405709 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:40:04.407699 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 12:40:04.407738 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:40:04.409740 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 12:40:04.409794 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:40:04.411905 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 12:40:04.411961 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:40:04.413874 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:40:04.413924 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:40:04.416932 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 12:40:04.418003 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 12:40:04.418056 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:40:04.420312 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Mar 21 12:40:04.420383 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:40:04.422377 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 21 12:40:04.422426 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:40:04.424530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:40:04.424577 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:04.427693 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 21 12:40:04.427757 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:40:04.433754 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 12:40:04.433876 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 12:40:04.511094 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 21 12:40:04.511233 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 21 12:40:04.513284 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 12:40:04.514965 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 21 12:40:04.515034 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 21 12:40:04.517906 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 12:40:04.540318 systemd[1]: Switching root.
Mar 21 12:40:04.572555 systemd-journald[192]: Journal stopped
Mar 21 12:40:05.778113 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Mar 21 12:40:05.778188 kernel: SELinux: policy capability network_peer_controls=1
Mar 21 12:40:05.778202 kernel: SELinux: policy capability open_perms=1
Mar 21 12:40:05.778214 kernel: SELinux: policy capability extended_socket_class=1
Mar 21 12:40:05.778226 kernel: SELinux: policy capability always_check_network=0
Mar 21 12:40:05.778238 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 21 12:40:05.778249 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 21 12:40:05.778261 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 21 12:40:05.778274 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 21 12:40:05.778288 kernel: audit: type=1403 audit(1742560804.985:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 12:40:05.778306 systemd[1]: Successfully loaded SELinux policy in 40.312ms.
Mar 21 12:40:05.778345 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.634ms.
Mar 21 12:40:05.778360 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:40:05.778373 systemd[1]: Detected virtualization kvm.
Mar 21 12:40:05.778386 systemd[1]: Detected architecture x86-64.
Mar 21 12:40:05.778399 systemd[1]: Detected first boot.
Mar 21 12:40:05.778411 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:40:05.778424 zram_generator::config[1063]: No configuration found.
Mar 21 12:40:05.778441 kernel: Guest personality initialized and is inactive
Mar 21 12:40:05.778453 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 21 12:40:05.778467 kernel: Initialized host personality
Mar 21 12:40:05.778479 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 12:40:05.778491 systemd[1]: Populated /etc with preset unit settings.
Mar 21 12:40:05.778505 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 12:40:05.778518 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 12:40:05.778530 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 12:40:05.778546 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 12:40:05.778559 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 12:40:05.778572 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 12:40:05.778585 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 12:40:05.778597 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 12:40:05.778610 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 12:40:05.778628 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 12:40:05.778640 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 12:40:05.778656 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 12:40:05.778669 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:40:05.778682 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:40:05.778695 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 12:40:05.778707 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 12:40:05.778720 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 12:40:05.778733 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:40:05.778748 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 21 12:40:05.778763 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:40:05.778776 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 12:40:05.778788 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 12:40:05.778801 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:40:05.778814 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 12:40:05.778826 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:40:05.778839 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:40:05.778852 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:40:05.778864 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:40:05.778880 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 12:40:05.778893 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 12:40:05.778905 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 12:40:05.778918 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:40:05.778931 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:40:05.778944 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:40:05.778964 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 12:40:05.778977 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 12:40:05.778990 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 12:40:05.779006 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 12:40:05.779019 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:05.779033 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 12:40:05.779046 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 12:40:05.779058 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 12:40:05.779072 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 12:40:05.779085 systemd[1]: Reached target machines.target - Containers.
Mar 21 12:40:05.779098 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 12:40:05.779113 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:40:05.779126 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:40:05.779139 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 12:40:05.779152 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:40:05.779165 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:40:05.779177 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:40:05.779190 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 12:40:05.779209 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:40:05.779222 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 12:40:05.779237 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 12:40:05.779251 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 12:40:05.779263 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 12:40:05.779276 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 12:40:05.779290 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:40:05.779304 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:40:05.779318 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:40:05.779343 kernel: loop: module loaded
Mar 21 12:40:05.779356 kernel: fuse: init (API version 7.39)
Mar 21 12:40:05.779372 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 12:40:05.779384 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 12:40:05.779398 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 12:40:05.779410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:40:05.779426 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 12:40:05.779439 systemd[1]: Stopped verity-setup.service.
Mar 21 12:40:05.779452 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:05.779482 systemd-journald[1134]: Collecting audit messages is disabled.
Mar 21 12:40:05.779506 systemd-journald[1134]: Journal started
Mar 21 12:40:05.779532 systemd-journald[1134]: Runtime Journal (/run/log/journal/f7ce80f8700f498b8b948d6ced063714) is 6M, max 47.9M, 41.9M free.
Mar 21 12:40:05.550663 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 12:40:05.564149 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 12:40:05.564596 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 12:40:05.781359 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 12:40:05.785346 kernel: ACPI: bus type drm_connector registered
Mar 21 12:40:05.792472 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:40:05.793357 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 12:40:05.795597 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 12:40:05.797567 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 12:40:05.798791 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 12:40:05.800006 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 12:40:05.801280 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 12:40:05.802779 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:40:05.804304 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 12:40:05.804533 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 12:40:05.806097 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:40:05.806343 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:40:05.807785 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:40:05.808027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:40:05.809402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:40:05.809731 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:40:05.811256 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 12:40:05.811497 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 12:40:05.812914 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:40:05.813142 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:40:05.814591 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:40:05.816021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 12:40:05.817585 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 12:40:05.821726 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 12:40:05.834479 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 12:40:05.837684 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 12:40:05.840111 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 12:40:05.841449 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 12:40:05.841539 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:40:05.843688 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 12:40:05.852453 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 12:40:05.854802 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 12:40:05.856098 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:40:05.857196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 12:40:05.862294 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 12:40:05.863815 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:40:05.871455 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 12:40:05.872715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:40:05.877461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:40:05.884179 systemd-journald[1134]: Time spent on flushing to /var/log/journal/f7ce80f8700f498b8b948d6ced063714 is 15.701ms for 1028 entries.
Mar 21 12:40:05.884179 systemd-journald[1134]: System Journal (/var/log/journal/f7ce80f8700f498b8b948d6ced063714) is 8M, max 195.6M, 187.6M free.
Mar 21 12:40:05.913377 systemd-journald[1134]: Received client request to flush runtime journal.
Mar 21 12:40:05.913413 kernel: loop0: detected capacity change from 0 to 151640
Mar 21 12:40:05.884864 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 21 12:40:05.888668 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:40:05.895376 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:40:05.896865 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 21 12:40:05.898729 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 21 12:40:05.907663 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 21 12:40:05.909383 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 21 12:40:05.911139 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:40:05.916408 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 21 12:40:05.921066 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 21 12:40:05.928364 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 21 12:40:05.928993 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 21 12:40:05.933035 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 21 12:40:05.939255 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 21 12:40:05.939270 systemd-tmpfiles[1184]: ACLs are not supported, ignoring.
Mar 21 12:40:05.947625 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:40:05.951778 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 21 12:40:05.953226 udevadm[1199]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Mar 21 12:40:05.961381 kernel: loop1: detected capacity change from 0 to 109808
Mar 21 12:40:05.965698 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 21 12:40:05.991503 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 21 12:40:05.996278 kernel: loop2: detected capacity change from 0 to 210664
Mar 21 12:40:05.996553 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:40:06.022640 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 21 12:40:06.022661 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
Mar 21 12:40:06.028191 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:40:06.034356 kernel: loop3: detected capacity change from 0 to 151640
Mar 21 12:40:06.049368 kernel: loop4: detected capacity change from 0 to 109808
Mar 21 12:40:06.058360 kernel: loop5: detected capacity change from 0 to 210664
Mar 21 12:40:06.067284 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Mar 21 12:40:06.067913 (sd-merge)[1210]: Merged extensions into '/usr'.
Mar 21 12:40:06.072706 systemd[1]: Reload requested from client PID 1183 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 21 12:40:06.072723 systemd[1]: Reloading...
Mar 21 12:40:06.145383 zram_generator::config[1238]: No configuration found.
Mar 21 12:40:06.211209 ldconfig[1178]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 21 12:40:06.275054 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:40:06.339683 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 21 12:40:06.339787 systemd[1]: Reloading finished in 266 ms.
Mar 21 12:40:06.357255 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 21 12:40:06.358805 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 21 12:40:06.375952 systemd[1]: Starting ensure-sysext.service...
Mar 21 12:40:06.378179 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:40:06.406192 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 21 12:40:06.406505 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 21 12:40:06.407441 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 21 12:40:06.407715 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Mar 21 12:40:06.407795 systemd-tmpfiles[1276]: ACLs are not supported, ignoring.
Mar 21 12:40:06.411919 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:40:06.411941 systemd-tmpfiles[1276]: Skipping /boot
Mar 21 12:40:06.413450 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)...
Mar 21 12:40:06.413468 systemd[1]: Reloading...
Mar 21 12:40:06.424789 systemd-tmpfiles[1276]: Detected autofs mount point /boot during canonicalization of boot.
Mar 21 12:40:06.424803 systemd-tmpfiles[1276]: Skipping /boot
Mar 21 12:40:06.471352 zram_generator::config[1308]: No configuration found.
Mar 21 12:40:06.578823 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:40:06.643119 systemd[1]: Reloading finished in 229 ms.
Mar 21 12:40:06.658113 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 21 12:40:06.674694 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:40:06.685459 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:40:06.688512 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 21 12:40:06.700451 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 21 12:40:06.704452 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:40:06.707781 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:40:06.712394 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 21 12:40:06.717391 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.717575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:40:06.720439 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:40:06.727067 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:40:06.733653 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:40:06.734896 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:40:06.735014 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:40:06.739586 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 21 12:40:06.740840 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.744994 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 21 12:40:06.747036 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:40:06.747245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:40:06.748935 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:40:06.749146 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:40:06.753221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:40:06.753533 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:40:06.754021 systemd-udevd[1349]: Using default interface naming scheme 'v255'.
Mar 21 12:40:06.759544 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:40:06.759786 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:40:06.762055 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 21 12:40:06.766133 augenrules[1377]: No rules
Mar 21 12:40:06.766467 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.766685 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:40:06.773537 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:40:06.778307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:40:06.785597 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:40:06.786741 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:40:06.786852 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:40:06.786952 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.788222 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:40:06.790986 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:40:06.791274 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:40:06.794400 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 21 12:40:06.796453 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 21 12:40:06.798258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:40:06.798492 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:40:06.800014 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 21 12:40:06.801841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 21 12:40:06.803827 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:40:06.804250 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:40:06.806005 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:40:06.806246 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:40:06.828323 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.830484 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 21 12:40:06.831725 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:40:06.836733 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:40:06.846641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:40:06.849678 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:40:06.855675 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:40:06.858352 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:40:06.858468 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:40:06.860515 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:40:06.861628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 21 12:40:06.861768 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:40:06.863098 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:40:06.863320 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:40:06.871849 systemd[1]: Finished ensure-sysext.service.
Mar 21 12:40:06.875399 augenrules[1419]: /sbin/augenrules: No change
Mar 21 12:40:06.877853 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:40:06.879415 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:40:06.883419 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:40:06.885524 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:40:06.895826 augenrules[1451]: No rules
Mar 21 12:40:06.898565 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:40:06.898794 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:40:06.900666 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 21 12:40:06.900942 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 21 12:40:06.906418 systemd-resolved[1348]: Positive Trust Anchors:
Mar 21 12:40:06.906730 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:40:06.906803 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:40:06.909386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1405)
Mar 21 12:40:06.916930 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Mar 21 12:40:06.921469 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:40:06.929968 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 21 12:40:06.930007 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:40:06.931994 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:40:06.932074 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:40:06.933427 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 21 12:40:06.934364 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 21 12:40:06.939263 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:40:06.941354 kernel: ACPI: button: Power Button [PWRF]
Mar 21 12:40:06.943294 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 21 12:40:06.960692 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 21 12:40:06.967587 systemd-networkd[1436]: lo: Link UP
Mar 21 12:40:06.967601 systemd-networkd[1436]: lo: Gained carrier
Mar 21 12:40:06.969442 systemd-networkd[1436]: Enumeration completed
Mar 21 12:40:06.969541 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:40:06.970918 systemd[1]: Reached target network.target - Network.
Mar 21 12:40:06.971298 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:40:06.971307 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:40:06.972515 systemd-networkd[1436]: eth0: Link UP
Mar 21 12:40:06.972526 systemd-networkd[1436]: eth0: Gained carrier
Mar 21 12:40:06.972539 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:40:06.973361 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 21 12:40:06.977156 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Mar 21 12:40:06.981620 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Mar 21 12:40:06.981940 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Mar 21 12:40:06.982122 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Mar 21 12:40:06.980020 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 21 12:40:06.985500 systemd-networkd[1436]: eth0: DHCPv4 address 10.0.0.131/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 21 12:40:06.989390 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Mar 21 12:40:07.006285 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 21 12:40:07.024461 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 21 12:40:07.026052 systemd[1]: Reached target time-set.target - System Time Set.
Mar 21 12:40:07.029106 systemd-timesyncd[1461]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Mar 21 12:40:07.029148 systemd-timesyncd[1461]: Initial clock synchronization to Fri 2025-03-21 12:40:07.227127 UTC.
Mar 21 12:40:07.044553 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:07.089357 kernel: mousedev: PS/2 mouse device common for all mice
Mar 21 12:40:07.091306 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:40:07.091607 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:07.095017 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:40:07.101352 kernel: kvm_amd: TSC scaling supported
Mar 21 12:40:07.101387 kernel: kvm_amd: Nested Virtualization enabled
Mar 21 12:40:07.101400 kernel: kvm_amd: Nested Paging enabled
Mar 21 12:40:07.101412 kernel: kvm_amd: LBR virtualization supported
Mar 21 12:40:07.101442 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Mar 21 12:40:07.101454 kernel: kvm_amd: Virtual GIF supported
Mar 21 12:40:07.122478 kernel: EDAC MC: Ver: 3.0.0
Mar 21 12:40:07.153501 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 21 12:40:07.156567 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 21 12:40:07.158108 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:40:07.183210 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:40:07.213606 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 21 12:40:07.215143 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:40:07.216305 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 12:40:07.217526 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 21 12:40:07.218817 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 21 12:40:07.220291 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 21 12:40:07.221501 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 21 12:40:07.222808 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 21 12:40:07.224090 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 21 12:40:07.224122 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:40:07.225093 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:40:07.226803 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 21 12:40:07.229703 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 21 12:40:07.233090 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 21 12:40:07.234546 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 21 12:40:07.235851 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 21 12:40:07.239633 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 21 12:40:07.241271 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 21 12:40:07.243929 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 21 12:40:07.245590 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 21 12:40:07.246800 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:40:07.247799 systemd[1]: Reached target basic.target - Basic System.
Mar 21 12:40:07.248815 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:40:07.248846 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 21 12:40:07.249836 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 21 12:40:07.252037 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 21 12:40:07.253439 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 21 12:40:07.254080 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 21 12:40:07.258786 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 21 12:40:07.260538 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 21 12:40:07.264123 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 21 12:40:07.266636 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 21 12:40:07.269549 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 21 12:40:07.270830 jq[1490]: false
Mar 21 12:40:07.272608 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 21 12:40:07.276375 dbus-daemon[1489]: [system] SELinux support is enabled
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found loop3
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found loop4
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found loop5
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found sr0
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda1
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda2
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda3
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found usr
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda4
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda6
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda7
Mar 21 12:40:07.281762 extend-filesystems[1491]: Found vda9
Mar 21 12:40:07.281762 extend-filesystems[1491]: Checking size of /dev/vda9
Mar 21 12:40:07.281851 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 21 12:40:07.287556 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 21 12:40:07.288694 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 21 12:40:07.289564 systemd[1]: Starting update-engine.service - Update Engine...
Mar 21 12:40:07.295784 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 21 12:40:07.301802 extend-filesystems[1491]: Resized partition /dev/vda9
Mar 21 12:40:07.302750 jq[1503]: true
Mar 21 12:40:07.303183 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 21 12:40:07.306529 extend-filesystems[1507]: resize2fs 1.47.2 (1-Jan-2025)
Mar 21 12:40:07.309086 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 21 12:40:07.310425 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Mar 21 12:40:07.312865 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 21 12:40:07.313128 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 21 12:40:07.315394 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1412)
Mar 21 12:40:07.314924 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 21 12:40:07.315165 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 21 12:40:07.319838 update_engine[1502]: I20250321 12:40:07.319772 1502 main.cc:92] Flatcar Update Engine starting
Mar 21 12:40:07.332374 jq[1515]: true
Mar 21 12:40:07.334346 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Mar 21 12:40:07.348721 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 21 12:40:07.357077 update_engine[1502]: I20250321 12:40:07.340117 1502 update_check_scheduler.cc:74] Next update check in 3m49s
Mar 21 12:40:07.349071 systemd[1]: motdgen.service: Deactivated successfully.
Mar 21 12:40:07.350125 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 21 12:40:07.359351 extend-filesystems[1507]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 21 12:40:07.359351 extend-filesystems[1507]: old_desc_blocks = 1, new_desc_blocks = 1
Mar 21 12:40:07.359351 extend-filesystems[1507]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Mar 21 12:40:07.365763 extend-filesystems[1491]: Resized filesystem in /dev/vda9
Mar 21 12:40:07.362590 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 21 12:40:07.362870 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 21 12:40:07.379474 tar[1512]: linux-amd64/helm
Mar 21 12:40:07.385265 systemd[1]: Started update-engine.service - Update Engine.
Mar 21 12:40:07.386679 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 21 12:40:07.386710 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 21 12:40:07.387986 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 21 12:40:07.388009 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 21 12:40:07.390631 systemd-logind[1497]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 21 12:40:07.390665 systemd-logind[1497]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 21 12:40:07.391135 systemd-logind[1497]: New seat seat0.
Mar 21 12:40:07.393433 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 21 12:40:07.394787 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 21 12:40:07.411758 bash[1544]: Updated "/home/core/.ssh/authorized_keys"
Mar 21 12:40:07.412851 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 21 12:40:07.415880 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Mar 21 12:40:07.444566 locksmithd[1545]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 21 12:40:07.534886 containerd[1520]: time="2025-03-21T12:40:07Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 21 12:40:07.535673 containerd[1520]: time="2025-03-21T12:40:07.535624004Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 21 12:40:07.544685 containerd[1520]: time="2025-03-21T12:40:07.544650241Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.013µs"
Mar 21 12:40:07.544685 containerd[1520]: time="2025-03-21T12:40:07.544672783Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 21 12:40:07.544761 containerd[1520]: time="2025-03-21T12:40:07.544691639Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 21 12:40:07.544871 containerd[1520]: time="2025-03-21T12:40:07.544844926Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 21 12:40:07.544871 containerd[1520]: time="2025-03-21T12:40:07.544866687Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 21 12:40:07.544920 containerd[1520]: time="2025-03-21T12:40:07.544890762Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 21 12:40:07.544990 containerd[1520]: time="2025-03-21T12:40:07.544962417Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 21 12:40:07.544990 containerd[1520]: time="2025-03-21T12:40:07.544978156Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545230 containerd[1520]: time="2025-03-21T12:40:07.545200804Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545230 containerd[1520]: time="2025-03-21T12:40:07.545219799Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545272 containerd[1520]: time="2025-03-21T12:40:07.545230529Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545272 containerd[1520]: time="2025-03-21T12:40:07.545240147Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545383 containerd[1520]: time="2025-03-21T12:40:07.545364120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545618 containerd[1520]: time="2025-03-21T12:40:07.545590865Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545650 containerd[1520]: time="2025-03-21T12:40:07.545626221Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 21 12:40:07.545650 containerd[1520]: time="2025-03-21T12:40:07.545636471Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 21 12:40:07.545692 containerd[1520]: time="2025-03-21T12:40:07.545672568Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 21 12:40:07.545980 containerd[1520]: time="2025-03-21T12:40:07.545958414Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 21 12:40:07.546046 containerd[1520]: time="2025-03-21T12:40:07.546028746Z" level=info msg="metadata content store policy set" policy=shared
Mar 21 12:40:07.551866 containerd[1520]: time="2025-03-21T12:40:07.551834775Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 21 12:40:07.551917 containerd[1520]: time="2025-03-21T12:40:07.551874850Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 21 12:40:07.551917 containerd[1520]: time="2025-03-21T12:40:07.551902993Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 21 12:40:07.551917 containerd[1520]: time="2025-03-21T12:40:07.551917099Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 21 12:40:07.551974 containerd[1520]: time="2025-03-21T12:40:07.551928761Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 21 12:40:07.551974 containerd[1520]: time="2025-03-21T12:40:07.551940313Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 21 12:40:07.551974 containerd[1520]: time="2025-03-21T12:40:07.551950822Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 21 12:40:07.551974 containerd[1520]: time="2025-03-21T12:40:07.551968285Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 21 12:40:07.552048 containerd[1520]: time="2025-03-21T12:40:07.551978655Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 21 12:40:07.552048 containerd[1520]: time="2025-03-21T12:40:07.551989495Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Mar 21 12:40:07.552048 containerd[1520]: time="2025-03-21T12:40:07.551998602Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Mar 21 12:40:07.552048 containerd[1520]: time="2025-03-21T12:40:07.552008871Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Mar 21 12:40:07.552134 containerd[1520]: time="2025-03-21T12:40:07.552111233Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Mar 21 12:40:07.552156 containerd[1520]: time="2025-03-21T12:40:07.552133144Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Mar 21 12:40:07.552156 containerd[1520]: time="2025-03-21T12:40:07.552146419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content
type=io.containerd.grpc.v1 Mar 21 12:40:07.552198 containerd[1520]: time="2025-03-21T12:40:07.552162620Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 21 12:40:07.552198 containerd[1520]: time="2025-03-21T12:40:07.552174141Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 21 12:40:07.552198 containerd[1520]: time="2025-03-21T12:40:07.552184821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 21 12:40:07.552256 containerd[1520]: time="2025-03-21T12:40:07.552199579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 21 12:40:07.552256 containerd[1520]: time="2025-03-21T12:40:07.552210650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 21 12:40:07.552256 containerd[1520]: time="2025-03-21T12:40:07.552221280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 21 12:40:07.552256 containerd[1520]: time="2025-03-21T12:40:07.552232200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 21 12:40:07.552256 containerd[1520]: time="2025-03-21T12:40:07.552242339Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 21 12:40:07.552362 containerd[1520]: time="2025-03-21T12:40:07.552295919Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 21 12:40:07.552362 containerd[1520]: time="2025-03-21T12:40:07.552309024Z" level=info msg="Start snapshots syncer" Mar 21 12:40:07.552362 containerd[1520]: time="2025-03-21T12:40:07.552344130Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 21 12:40:07.552679 containerd[1520]: time="2025-03-21T12:40:07.552636027Z" 
level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 21 12:40:07.552793 containerd[1520]: time="2025-03-21T12:40:07.552683116Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 21 12:40:07.552793 containerd[1520]: 
time="2025-03-21T12:40:07.552746044Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 21 12:40:07.552923 containerd[1520]: time="2025-03-21T12:40:07.552886166Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 21 12:40:07.552923 containerd[1520]: time="2025-03-21T12:40:07.552919299Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 21 12:40:07.552972 containerd[1520]: time="2025-03-21T12:40:07.552939486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 21 12:40:07.552972 containerd[1520]: time="2025-03-21T12:40:07.552951198Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 21 12:40:07.552972 containerd[1520]: time="2025-03-21T12:40:07.552963201Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 21 12:40:07.552972 containerd[1520]: time="2025-03-21T12:40:07.552972809Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 21 12:40:07.553053 containerd[1520]: time="2025-03-21T12:40:07.552985152Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 21 12:40:07.553053 containerd[1520]: time="2025-03-21T12:40:07.553006723Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 21 12:40:07.553053 containerd[1520]: time="2025-03-21T12:40:07.553018996Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 21 12:40:07.553053 containerd[1520]: time="2025-03-21T12:40:07.553028664Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 21 12:40:07.554385 containerd[1520]: 
time="2025-03-21T12:40:07.553831850Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553881233Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553891822Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553911279Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553919304Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553930605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553943820Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553963737Z" level=info msg="runtime interface created" Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553969999Z" level=info msg="created NRI interface" Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553981190Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.553998703Z" level=info msg="Connect containerd service" Mar 21 12:40:07.554385 containerd[1520]: time="2025-03-21T12:40:07.554049398Z" level=info msg="using experimental NRI integration - 
disable nri plugin to prevent this" Mar 21 12:40:07.555750 containerd[1520]: time="2025-03-21T12:40:07.555729589Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 21 12:40:07.639924 containerd[1520]: time="2025-03-21T12:40:07.639800181Z" level=info msg="Start subscribing containerd event" Mar 21 12:40:07.639924 containerd[1520]: time="2025-03-21T12:40:07.639911370Z" level=info msg="Start recovering state" Mar 21 12:40:07.640048 containerd[1520]: time="2025-03-21T12:40:07.639943260Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 21 12:40:07.640048 containerd[1520]: time="2025-03-21T12:40:07.640007841Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 21 12:40:07.641483 containerd[1520]: time="2025-03-21T12:40:07.641459152Z" level=info msg="Start event monitor" Mar 21 12:40:07.641515 containerd[1520]: time="2025-03-21T12:40:07.641506551Z" level=info msg="Start cni network conf syncer for default" Mar 21 12:40:07.641536 containerd[1520]: time="2025-03-21T12:40:07.641515769Z" level=info msg="Start streaming server" Mar 21 12:40:07.641536 containerd[1520]: time="2025-03-21T12:40:07.641531698Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 21 12:40:07.641572 containerd[1520]: time="2025-03-21T12:40:07.641540285Z" level=info msg="runtime interface starting up..." Mar 21 12:40:07.641572 containerd[1520]: time="2025-03-21T12:40:07.641564560Z" level=info msg="starting plugins..." Mar 21 12:40:07.641624 containerd[1520]: time="2025-03-21T12:40:07.641582063Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 21 12:40:07.641909 systemd[1]: Started containerd.service - containerd container runtime. 
Mar 21 12:40:07.643343 containerd[1520]: time="2025-03-21T12:40:07.642218126Z" level=info msg="containerd successfully booted in 0.107863s" Mar 21 12:40:07.700566 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 21 12:40:07.724050 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 21 12:40:07.727142 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 21 12:40:07.746766 systemd[1]: issuegen.service: Deactivated successfully. Mar 21 12:40:07.747047 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 21 12:40:07.749854 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 21 12:40:07.754382 tar[1512]: linux-amd64/LICENSE Mar 21 12:40:07.754472 tar[1512]: linux-amd64/README.md Mar 21 12:40:07.776407 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 21 12:40:07.778214 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 21 12:40:07.781654 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 21 12:40:07.783990 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 21 12:40:07.785244 systemd[1]: Reached target getty.target - Login Prompts. Mar 21 12:40:08.158834 systemd-networkd[1436]: eth0: Gained IPv6LL Mar 21 12:40:08.161867 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 21 12:40:08.163709 systemd[1]: Reached target network-online.target - Network is Online. Mar 21 12:40:08.166329 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 21 12:40:08.168879 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:40:08.171473 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 21 12:40:08.211764 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 21 12:40:08.215063 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Mar 21 12:40:08.215338 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 21 12:40:08.217004 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Mar 21 12:40:08.804981 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:40:08.806734 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 21 12:40:08.808100 systemd[1]: Startup finished in 719ms (kernel) + 5.278s (initrd) + 3.860s (userspace) = 9.859s. Mar 21 12:40:08.808897 (kubelet)[1615]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:40:09.242264 kubelet[1615]: E0321 12:40:09.242068 1615 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:40:09.246180 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:40:09.246416 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:40:09.246793 systemd[1]: kubelet.service: Consumed 921ms CPU time, 243.2M memory peak. Mar 21 12:40:13.173352 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 21 12:40:13.174634 systemd[1]: Started sshd@0-10.0.0.131:22-10.0.0.1:46246.service - OpenSSH per-connection server daemon (10.0.0.1:46246). Mar 21 12:40:13.235564 sshd[1629]: Accepted publickey for core from 10.0.0.1 port 46246 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:13.237577 sshd-session[1629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:13.248143 systemd-logind[1497]: New session 1 of user core. 
Mar 21 12:40:13.249452 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 21 12:40:13.250641 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 21 12:40:13.272907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 21 12:40:13.275526 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 21 12:40:13.289470 (systemd)[1633]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 21 12:40:13.291772 systemd-logind[1497]: New session c1 of user core. Mar 21 12:40:13.442220 systemd[1633]: Queued start job for default target default.target. Mar 21 12:40:13.452654 systemd[1633]: Created slice app.slice - User Application Slice. Mar 21 12:40:13.452680 systemd[1633]: Reached target paths.target - Paths. Mar 21 12:40:13.452720 systemd[1633]: Reached target timers.target - Timers. Mar 21 12:40:13.454243 systemd[1633]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 21 12:40:13.465001 systemd[1633]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 21 12:40:13.465121 systemd[1633]: Reached target sockets.target - Sockets. Mar 21 12:40:13.465164 systemd[1633]: Reached target basic.target - Basic System. Mar 21 12:40:13.465207 systemd[1633]: Reached target default.target - Main User Target. Mar 21 12:40:13.465238 systemd[1633]: Startup finished in 166ms. Mar 21 12:40:13.465701 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 21 12:40:13.467537 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 21 12:40:13.538101 systemd[1]: Started sshd@1-10.0.0.131:22-10.0.0.1:46256.service - OpenSSH per-connection server daemon (10.0.0.1:46256). 
Mar 21 12:40:13.579976 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 46256 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:13.581474 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:13.585713 systemd-logind[1497]: New session 2 of user core. Mar 21 12:40:13.599487 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 21 12:40:13.651689 sshd[1646]: Connection closed by 10.0.0.1 port 46256 Mar 21 12:40:13.651988 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Mar 21 12:40:13.672228 systemd[1]: sshd@1-10.0.0.131:22-10.0.0.1:46256.service: Deactivated successfully. Mar 21 12:40:13.673980 systemd[1]: session-2.scope: Deactivated successfully. Mar 21 12:40:13.675566 systemd-logind[1497]: Session 2 logged out. Waiting for processes to exit. Mar 21 12:40:13.676815 systemd[1]: Started sshd@2-10.0.0.131:22-10.0.0.1:46272.service - OpenSSH per-connection server daemon (10.0.0.1:46272). Mar 21 12:40:13.677502 systemd-logind[1497]: Removed session 2. Mar 21 12:40:13.725072 sshd[1651]: Accepted publickey for core from 10.0.0.1 port 46272 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:13.726639 sshd-session[1651]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:13.730915 systemd-logind[1497]: New session 3 of user core. Mar 21 12:40:13.740469 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 21 12:40:13.790641 sshd[1654]: Connection closed by 10.0.0.1 port 46272 Mar 21 12:40:13.791013 sshd-session[1651]: pam_unix(sshd:session): session closed for user core Mar 21 12:40:13.800985 systemd[1]: sshd@2-10.0.0.131:22-10.0.0.1:46272.service: Deactivated successfully. Mar 21 12:40:13.802910 systemd[1]: session-3.scope: Deactivated successfully. Mar 21 12:40:13.804277 systemd-logind[1497]: Session 3 logged out. Waiting for processes to exit. 
Mar 21 12:40:13.805673 systemd[1]: Started sshd@3-10.0.0.131:22-10.0.0.1:46278.service - OpenSSH per-connection server daemon (10.0.0.1:46278). Mar 21 12:40:13.806392 systemd-logind[1497]: Removed session 3. Mar 21 12:40:13.851014 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 46278 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:13.852419 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:13.856331 systemd-logind[1497]: New session 4 of user core. Mar 21 12:40:13.873466 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 21 12:40:13.926496 sshd[1662]: Connection closed by 10.0.0.1 port 46278 Mar 21 12:40:13.926866 sshd-session[1659]: pam_unix(sshd:session): session closed for user core Mar 21 12:40:13.943830 systemd[1]: sshd@3-10.0.0.131:22-10.0.0.1:46278.service: Deactivated successfully. Mar 21 12:40:13.945438 systemd[1]: session-4.scope: Deactivated successfully. Mar 21 12:40:13.947090 systemd-logind[1497]: Session 4 logged out. Waiting for processes to exit. Mar 21 12:40:13.948423 systemd[1]: Started sshd@4-10.0.0.131:22-10.0.0.1:53736.service - OpenSSH per-connection server daemon (10.0.0.1:53736). Mar 21 12:40:13.949111 systemd-logind[1497]: Removed session 4. Mar 21 12:40:13.994468 sshd[1667]: Accepted publickey for core from 10.0.0.1 port 53736 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:13.995832 sshd-session[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:13.999960 systemd-logind[1497]: New session 5 of user core. Mar 21 12:40:14.009471 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 21 12:40:14.068561 sudo[1671]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 21 12:40:14.068935 sudo[1671]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:40:14.090636 sudo[1671]: pam_unix(sudo:session): session closed for user root Mar 21 12:40:14.092246 sshd[1670]: Connection closed by 10.0.0.1 port 53736 Mar 21 12:40:14.092720 sshd-session[1667]: pam_unix(sshd:session): session closed for user core Mar 21 12:40:14.111801 systemd[1]: sshd@4-10.0.0.131:22-10.0.0.1:53736.service: Deactivated successfully. Mar 21 12:40:14.114016 systemd[1]: session-5.scope: Deactivated successfully. Mar 21 12:40:14.114946 systemd-logind[1497]: Session 5 logged out. Waiting for processes to exit. Mar 21 12:40:14.117896 systemd[1]: Started sshd@5-10.0.0.131:22-10.0.0.1:53750.service - OpenSSH per-connection server daemon (10.0.0.1:53750). Mar 21 12:40:14.118402 systemd-logind[1497]: Removed session 5. Mar 21 12:40:14.169231 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 53750 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:14.171063 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:14.175600 systemd-logind[1497]: New session 6 of user core. Mar 21 12:40:14.191481 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 21 12:40:14.246817 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 21 12:40:14.247158 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:40:14.252452 sudo[1681]: pam_unix(sudo:session): session closed for user root Mar 21 12:40:14.259064 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 21 12:40:14.259410 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:40:14.270689 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:40:14.311271 augenrules[1703]: No rules Mar 21 12:40:14.313101 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:40:14.313382 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:40:14.314603 sudo[1680]: pam_unix(sudo:session): session closed for user root Mar 21 12:40:14.316319 sshd[1679]: Connection closed by 10.0.0.1 port 53750 Mar 21 12:40:14.316640 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Mar 21 12:40:14.325139 systemd[1]: sshd@5-10.0.0.131:22-10.0.0.1:53750.service: Deactivated successfully. Mar 21 12:40:14.327162 systemd[1]: session-6.scope: Deactivated successfully. Mar 21 12:40:14.328742 systemd-logind[1497]: Session 6 logged out. Waiting for processes to exit. Mar 21 12:40:14.330089 systemd[1]: Started sshd@6-10.0.0.131:22-10.0.0.1:53764.service - OpenSSH per-connection server daemon (10.0.0.1:53764). Mar 21 12:40:14.330838 systemd-logind[1497]: Removed session 6. Mar 21 12:40:14.384026 sshd[1711]: Accepted publickey for core from 10.0.0.1 port 53764 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:40:14.385410 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:40:14.389631 systemd-logind[1497]: New session 7 of user core. 
Mar 21 12:40:14.405535 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 21 12:40:14.458851 sudo[1715]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 21 12:40:14.459196 sudo[1715]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:40:14.752888 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 21 12:40:14.766660 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 21 12:40:15.014311 dockerd[1735]: time="2025-03-21T12:40:15.014159552Z" level=info msg="Starting up" Mar 21 12:40:15.016449 dockerd[1735]: time="2025-03-21T12:40:15.016399168Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 21 12:40:15.837267 dockerd[1735]: time="2025-03-21T12:40:15.837196080Z" level=info msg="Loading containers: start." Mar 21 12:40:16.012375 kernel: Initializing XFRM netlink socket Mar 21 12:40:16.088190 systemd-networkd[1436]: docker0: Link UP Mar 21 12:40:16.150781 dockerd[1735]: time="2025-03-21T12:40:16.150731350Z" level=info msg="Loading containers: done." Mar 21 12:40:16.164423 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3137629881-merged.mount: Deactivated successfully. 
Mar 21 12:40:16.167142 dockerd[1735]: time="2025-03-21T12:40:16.167103238Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 21 12:40:16.167208 dockerd[1735]: time="2025-03-21T12:40:16.167178735Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 21 12:40:16.167333 dockerd[1735]: time="2025-03-21T12:40:16.167308128Z" level=info msg="Daemon has completed initialization" Mar 21 12:40:16.203383 dockerd[1735]: time="2025-03-21T12:40:16.203312708Z" level=info msg="API listen on /run/docker.sock" Mar 21 12:40:16.203416 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 21 12:40:17.071226 containerd[1520]: time="2025-03-21T12:40:17.071179382Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 21 12:40:18.846436 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2107843801.mount: Deactivated successfully. Mar 21 12:40:19.496789 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 21 12:40:19.498949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:40:19.671728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 21 12:40:19.679615 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 21 12:40:19.725136 kubelet[2013]: E0321 12:40:19.725077 2013 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 21 12:40:19.731896 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 12:40:19.732086 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 21 12:40:19.732437 systemd[1]: kubelet.service: Consumed 209ms CPU time, 98.3M memory peak.
Mar 21 12:40:20.384107 containerd[1520]: time="2025-03-21T12:40:20.384014762Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:20.411202 containerd[1520]: time="2025-03-21T12:40:20.411144657Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32674573"
Mar 21 12:40:20.431199 containerd[1520]: time="2025-03-21T12:40:20.431142452Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:20.448212 containerd[1520]: time="2025-03-21T12:40:20.448113225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:20.448793 containerd[1520]: time="2025-03-21T12:40:20.448757867Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 3.377538301s"
Mar 21 12:40:20.448848 containerd[1520]: time="2025-03-21T12:40:20.448796280Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\""
Mar 21 12:40:20.467189 containerd[1520]: time="2025-03-21T12:40:20.467147097Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\""
Mar 21 12:40:22.584796 containerd[1520]: time="2025-03-21T12:40:22.584707076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:22.585663 containerd[1520]: time="2025-03-21T12:40:22.585565827Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29619772"
Mar 21 12:40:22.586895 containerd[1520]: time="2025-03-21T12:40:22.586860088Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:22.589730 containerd[1520]: time="2025-03-21T12:40:22.589682757Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:22.590776 containerd[1520]: time="2025-03-21T12:40:22.590737933Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 2.12354423s"
Mar 21 12:40:22.590812 containerd[1520]: time="2025-03-21T12:40:22.590779087Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\""
Mar 21 12:40:22.612115 containerd[1520]: time="2025-03-21T12:40:22.612076123Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\""
Mar 21 12:40:23.692070 containerd[1520]: time="2025-03-21T12:40:23.692001511Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:23.693003 containerd[1520]: time="2025-03-21T12:40:23.692947500Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17903309"
Mar 21 12:40:23.694273 containerd[1520]: time="2025-03-21T12:40:23.694212819Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:23.697006 containerd[1520]: time="2025-03-21T12:40:23.696965409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:23.697817 containerd[1520]: time="2025-03-21T12:40:23.697776014Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.085661798s"
Mar 21 12:40:23.697817 containerd[1520]: time="2025-03-21T12:40:23.697806220Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\""
Mar 21 12:40:23.716774 containerd[1520]: time="2025-03-21T12:40:23.716731412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\""
Mar 21 12:40:24.753281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1750129112.mount: Deactivated successfully.
Mar 21 12:40:25.382492 containerd[1520]: time="2025-03-21T12:40:25.382417098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:25.383115 containerd[1520]: time="2025-03-21T12:40:25.383030400Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372"
Mar 21 12:40:25.384163 containerd[1520]: time="2025-03-21T12:40:25.384119375Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:25.385908 containerd[1520]: time="2025-03-21T12:40:25.385873010Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:25.386311 containerd[1520]: time="2025-03-21T12:40:25.386269433Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.669502402s"
Mar 21 12:40:25.386364 containerd[1520]: time="2025-03-21T12:40:25.386309540Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\""
Mar 21 12:40:25.406656 containerd[1520]: time="2025-03-21T12:40:25.406606604Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Mar 21 12:40:25.940480 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount536886734.mount: Deactivated successfully.
Mar 21 12:40:26.574995 containerd[1520]: time="2025-03-21T12:40:26.574937955Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:26.575894 containerd[1520]: time="2025-03-21T12:40:26.575825318Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761"
Mar 21 12:40:26.577103 containerd[1520]: time="2025-03-21T12:40:26.577062184Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:26.579390 containerd[1520]: time="2025-03-21T12:40:26.579355126Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:26.580209 containerd[1520]: time="2025-03-21T12:40:26.580179268Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.173533684s"
Mar 21 12:40:26.580252 containerd[1520]: time="2025-03-21T12:40:26.580210436Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Mar 21 12:40:26.598526 containerd[1520]: time="2025-03-21T12:40:26.598493827Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Mar 21 12:40:27.191495 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1005218875.mount: Deactivated successfully.
Mar 21 12:40:27.197387 containerd[1520]: time="2025-03-21T12:40:27.197327742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:27.198156 containerd[1520]: time="2025-03-21T12:40:27.198113307Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290"
Mar 21 12:40:27.199347 containerd[1520]: time="2025-03-21T12:40:27.199301193Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:27.201531 containerd[1520]: time="2025-03-21T12:40:27.201497356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:27.202114 containerd[1520]: time="2025-03-21T12:40:27.202074655Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 603.545203ms"
Mar 21 12:40:27.202149 containerd[1520]: time="2025-03-21T12:40:27.202113433Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Mar 21 12:40:27.220052 containerd[1520]: time="2025-03-21T12:40:27.220005273Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Mar 21 12:40:27.730504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3675661616.mount: Deactivated successfully.
Mar 21 12:40:29.537876 containerd[1520]: time="2025-03-21T12:40:29.537811170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:29.538597 containerd[1520]: time="2025-03-21T12:40:29.538524404Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571"
Mar 21 12:40:29.539740 containerd[1520]: time="2025-03-21T12:40:29.539708463Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:29.542343 containerd[1520]: time="2025-03-21T12:40:29.542297528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:29.543280 containerd[1520]: time="2025-03-21T12:40:29.543234516Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.323199863s"
Mar 21 12:40:29.543316 containerd[1520]: time="2025-03-21T12:40:29.543280219Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Mar 21 12:40:29.825553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Mar 21 12:40:29.827221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:40:29.993573 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:40:30.010680 (kubelet)[2210]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 21 12:40:30.120880 kubelet[2210]: E0321 12:40:30.120752 2210 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 21 12:40:30.125282 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 12:40:30.125506 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 21 12:40:30.125873 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98M memory peak.
Mar 21 12:40:32.127503 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:40:32.127734 systemd[1]: kubelet.service: Consumed 205ms CPU time, 98M memory peak.
Mar 21 12:40:32.130606 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:40:32.161417 systemd[1]: Reload requested from client PID 2301 ('systemctl') (unit session-7.scope)...
Mar 21 12:40:32.161429 systemd[1]: Reloading...
Mar 21 12:40:32.255370 zram_generator::config[2350]: No configuration found.
Mar 21 12:40:32.591720 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:40:32.700468 systemd[1]: Reloading finished in 538 ms.
Mar 21 12:40:32.767186 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Mar 21 12:40:32.767307 systemd[1]: kubelet.service: Failed with result 'signal'.
Mar 21 12:40:32.767702 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:40:32.767755 systemd[1]: kubelet.service: Consumed 139ms CPU time, 83.6M memory peak.
Mar 21 12:40:32.770742 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:40:32.960209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:40:32.964752 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 12:40:33.001403 kubelet[2393]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:40:33.001403 kubelet[2393]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 21 12:40:33.001403 kubelet[2393]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:40:33.001774 kubelet[2393]: I0321 12:40:33.001442 2393 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 12:40:33.248834 kubelet[2393]: I0321 12:40:33.248726 2393 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 21 12:40:33.248834 kubelet[2393]: I0321 12:40:33.248753 2393 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 12:40:33.249042 kubelet[2393]: I0321 12:40:33.248945 2393 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 21 12:40:33.263065 kubelet[2393]: E0321 12:40:33.263031 2393 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.264042 kubelet[2393]: I0321 12:40:33.264016 2393 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:40:33.277282 kubelet[2393]: I0321 12:40:33.277240 2393 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:40:33.278997 kubelet[2393]: I0321 12:40:33.278957 2393 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:40:33.279186 kubelet[2393]: I0321 12:40:33.278992 2393 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 21 12:40:33.279287 kubelet[2393]: I0321 12:40:33.279201 2393 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:40:33.279287 kubelet[2393]: I0321 12:40:33.279211 2393 container_manager_linux.go:301] "Creating device plugin manager"
Mar 21 12:40:33.279396 kubelet[2393]: I0321 12:40:33.279376 2393 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:40:33.279988 kubelet[2393]: I0321 12:40:33.279968 2393 kubelet.go:400] "Attempting to sync node with API server"
Mar 21 12:40:33.279988 kubelet[2393]: I0321 12:40:33.279983 2393 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:40:33.280033 kubelet[2393]: I0321 12:40:33.280004 2393 kubelet.go:312] "Adding apiserver pod source"
Mar 21 12:40:33.280033 kubelet[2393]: I0321 12:40:33.280024 2393 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:40:33.280576 kubelet[2393]: W0321 12:40:33.280533 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.280687 kubelet[2393]: E0321 12:40:33.280655 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.281937 kubelet[2393]: W0321 12:40:33.281890 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.281937 kubelet[2393]: E0321 12:40:33.281932 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.283997 kubelet[2393]: I0321 12:40:33.283968 2393 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:40:33.285141 kubelet[2393]: I0321 12:40:33.285119 2393 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:40:33.285198 kubelet[2393]: W0321 12:40:33.285176 2393 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 21 12:40:33.285837 kubelet[2393]: I0321 12:40:33.285813 2393 server.go:1264] "Started kubelet"
Mar 21 12:40:33.286973 kubelet[2393]: I0321 12:40:33.286201 2393 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:40:33.286973 kubelet[2393]: I0321 12:40:33.286587 2393 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:40:33.286973 kubelet[2393]: I0321 12:40:33.286620 2393 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:40:33.287124 kubelet[2393]: I0321 12:40:33.287092 2393 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:40:33.288629 kubelet[2393]: I0321 12:40:33.287475 2393 server.go:455] "Adding debug handlers to kubelet server"
Mar 21 12:40:33.289357 kubelet[2393]: E0321 12:40:33.289106 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:40:33.289357 kubelet[2393]: I0321 12:40:33.289144 2393 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 21 12:40:33.289357 kubelet[2393]: I0321 12:40:33.289231 2393 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:40:33.289357 kubelet[2393]: I0321 12:40:33.289274 2393 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:40:33.290157 kubelet[2393]: W0321 12:40:33.289573 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.290157 kubelet[2393]: E0321 12:40:33.289611 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.290157 kubelet[2393]: E0321 12:40:33.289961 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="200ms"
Mar 21 12:40:33.290407 kubelet[2393]: E0321 12:40:33.290375 2393 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:40:33.290492 kubelet[2393]: I0321 12:40:33.290417 2393 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:40:33.290516 kubelet[2393]: I0321 12:40:33.290496 2393 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:40:33.291257 kubelet[2393]: E0321 12:40:33.291163 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed1d86b822376 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:40:33.28579263 +0000 UTC m=+0.317354727,LastTimestamp:2025-03-21 12:40:33.28579263 +0000 UTC m=+0.317354727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:40:33.291488 kubelet[2393]: I0321 12:40:33.291470 2393 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:40:33.304081 kubelet[2393]: I0321 12:40:33.303415 2393 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 21 12:40:33.304081 kubelet[2393]: I0321 12:40:33.303437 2393 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 21 12:40:33.304081 kubelet[2393]: I0321 12:40:33.303473 2393 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:40:33.305884 kubelet[2393]: I0321 12:40:33.305850 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:40:33.307262 kubelet[2393]: I0321 12:40:33.307242 2393 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:40:33.307323 kubelet[2393]: I0321 12:40:33.307268 2393 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 21 12:40:33.307323 kubelet[2393]: I0321 12:40:33.307288 2393 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 21 12:40:33.307323 kubelet[2393]: E0321 12:40:33.307322 2393 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:40:33.308420 kubelet[2393]: W0321 12:40:33.307948 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.308420 kubelet[2393]: E0321 12:40:33.307979 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:33.390888 kubelet[2393]: I0321 12:40:33.390844 2393 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:40:33.391153 kubelet[2393]: E0321 12:40:33.391121 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 21 12:40:33.408358 kubelet[2393]: E0321 12:40:33.408281 2393 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 21 12:40:33.490978 kubelet[2393]: E0321 12:40:33.490923 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="400ms"
Mar 21 12:40:33.592171 kubelet[2393]: I0321 12:40:33.592145 2393 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:40:33.592431 kubelet[2393]: E0321 12:40:33.592395 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 21 12:40:33.608589 kubelet[2393]: E0321 12:40:33.608545 2393 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 21 12:40:33.668023 kubelet[2393]: I0321 12:40:33.667983 2393 policy_none.go:49] "None policy: Start"
Mar 21 12:40:33.668695 kubelet[2393]: I0321 12:40:33.668658 2393 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 21 12:40:33.668695 kubelet[2393]: I0321 12:40:33.668689 2393 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:40:33.675585 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 21 12:40:33.689613 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 21 12:40:33.692582 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 21 12:40:33.707246 kubelet[2393]: I0321 12:40:33.707206 2393 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:40:33.707618 kubelet[2393]: I0321 12:40:33.707456 2393 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:40:33.707618 kubelet[2393]: I0321 12:40:33.707569 2393 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:40:33.708549 kubelet[2393]: E0321 12:40:33.708504 2393 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 21 12:40:33.892024 kubelet[2393]: E0321 12:40:33.891860 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="800ms"
Mar 21 12:40:33.994466 kubelet[2393]: I0321 12:40:33.994426 2393 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:40:33.994850 kubelet[2393]: E0321 12:40:33.994817 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 21 12:40:34.008951 kubelet[2393]: I0321 12:40:34.008907 2393 topology_manager.go:215] "Topology Admit Handler" podUID="57001f3912841fb47bb035bc94456d14" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 21 12:40:34.009980 kubelet[2393]: I0321 12:40:34.009911 2393 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 21 12:40:34.010984 kubelet[2393]: I0321 12:40:34.010952 2393 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 21 12:40:34.017133 systemd[1]: Created slice kubepods-burstable-pod57001f3912841fb47bb035bc94456d14.slice - libcontainer container kubepods-burstable-pod57001f3912841fb47bb035bc94456d14.slice.
Mar 21 12:40:34.032165 systemd[1]: Created slice kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice - libcontainer container kubepods-burstable-pod23a18e2dc14f395c5f1bea711a5a9344.slice.
Mar 21 12:40:34.036772 systemd[1]: Created slice kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice - libcontainer container kubepods-burstable-podd79ab404294384d4bcc36fb5b5509bbb.slice.
Mar 21 12:40:34.093673 kubelet[2393]: I0321 12:40:34.093626 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:34.093673 kubelet[2393]: I0321 12:40:34.093672 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:34.093815 kubelet[2393]: I0321 12:40:34.093697 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:34.093815 kubelet[2393]: I0321 12:40:34.093740 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:34.093815 kubelet[2393]: I0321 12:40:34.093761 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:34.093815 kubelet[2393]: I0321 12:40:34.093781 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:40:34.093815 kubelet[2393]: I0321 12:40:34.093801 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:34.093932 kubelet[2393]: I0321 12:40:34.093822 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:34.093932 kubelet[2393]: I0321 12:40:34.093843 2393 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:34.331288 containerd[1520]: time="2025-03-21T12:40:34.331228452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57001f3912841fb47bb035bc94456d14,Namespace:kube-system,Attempt:0,}"
Mar 21 12:40:34.334805 containerd[1520]: time="2025-03-21T12:40:34.334761910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,}"
Mar 21 12:40:34.339492 containerd[1520]: time="2025-03-21T12:40:34.339453988Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,}"
Mar 21 12:40:34.407216 kubelet[2393]: W0321 12:40:34.407156 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.407216 kubelet[2393]: E0321 12:40:34.407218 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.0.0.131:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.486865 kubelet[2393]: W0321 12:40:34.486803 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.486972 kubelet[2393]: E0321 12:40:34.486874 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.0.0.131:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.679649 kubelet[2393]: W0321 12:40:34.679531 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.679649 kubelet[2393]: E0321 12:40:34.679587 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.0.0.131:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.692989 kubelet[2393]: E0321 12:40:34.692961 2393 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.131:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.131:6443: connect: connection refused" interval="1.6s"
Mar 21 12:40:34.796296 kubelet[2393]: I0321 12:40:34.796252 2393 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:40:34.796746 kubelet[2393]: E0321 12:40:34.796694 2393 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.0.0.131:6443/api/v1/nodes\": dial tcp 10.0.0.131:6443: connect: connection refused" node="localhost"
Mar 21 12:40:34.826231 kubelet[2393]: W0321 12:40:34.826198 2393 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused
Mar 21 12:40:34.826231
kubelet[2393]: E0321 12:40:34.826231 2393 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.0.0.131:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.131:6443: connect: connection refused Mar 21 12:40:35.371835 kubelet[2393]: E0321 12:40:35.371737 2393 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.131:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.131:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed1d86b822376 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:40:33.28579263 +0000 UTC m=+0.317354727,LastTimestamp:2025-03-21 12:40:33.28579263 +0000 UTC m=+0.317354727,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Mar 21 12:40:35.447844 kubelet[2393]: E0321 12:40:35.447815 2393 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.0.0.131:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.0.0.131:6443: connect: connection refused Mar 21 12:40:35.506957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount158055622.mount: Deactivated successfully. 
Mar 21 12:40:35.512533 containerd[1520]: time="2025-03-21T12:40:35.512488840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:40:35.515398 containerd[1520]: time="2025-03-21T12:40:35.515309975Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Mar 21 12:40:35.516482 containerd[1520]: time="2025-03-21T12:40:35.516436734Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:40:35.518506 containerd[1520]: time="2025-03-21T12:40:35.518450372Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:40:35.519472 containerd[1520]: time="2025-03-21T12:40:35.519427491Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 21 12:40:35.520415 containerd[1520]: time="2025-03-21T12:40:35.520381419Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 12:40:35.521292 containerd[1520]: time="2025-03-21T12:40:35.521240858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Mar 21 12:40:35.523304 containerd[1520]: time="2025-03-21T12:40:35.523261484Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 21 
12:40:35.524737 containerd[1520]: time="2025-03-21T12:40:35.524691566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.187741767s" Mar 21 12:40:35.525431 containerd[1520]: time="2025-03-21T12:40:35.525392471Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.181974219s" Mar 21 12:40:35.526131 containerd[1520]: time="2025-03-21T12:40:35.526091020Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.192238626s" Mar 21 12:40:35.557789 containerd[1520]: time="2025-03-21T12:40:35.557642157Z" level=info msg="connecting to shim ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9" address="unix:///run/containerd/s/6d58329b03e6fe6d99fe7ceec923f20439a43651f5762217575a1887ad781404" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:40:35.560255 containerd[1520]: time="2025-03-21T12:40:35.560225225Z" level=info msg="connecting to shim 7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48" address="unix:///run/containerd/s/cf72477f92725771ad71631f3f708be41a1a377179b2571d56ae5fb37d36bc2d" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:40:35.563805 containerd[1520]: time="2025-03-21T12:40:35.562879414Z" level=info msg="connecting to shim 
71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88" address="unix:///run/containerd/s/32a355238df4ede7a605d4b79d68471de04529214d46ebe18207eba93f6fc200" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:40:35.586458 systemd[1]: Started cri-containerd-7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48.scope - libcontainer container 7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48. Mar 21 12:40:35.590707 systemd[1]: Started cri-containerd-71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88.scope - libcontainer container 71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88. Mar 21 12:40:35.592370 systemd[1]: Started cri-containerd-ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9.scope - libcontainer container ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9. Mar 21 12:40:35.631560 containerd[1520]: time="2025-03-21T12:40:35.631426925Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:23a18e2dc14f395c5f1bea711a5a9344,Namespace:kube-system,Attempt:0,} returns sandbox id \"7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48\"" Mar 21 12:40:35.637179 containerd[1520]: time="2025-03-21T12:40:35.636537400Z" level=info msg="CreateContainer within sandbox \"7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Mar 21 12:40:35.641605 containerd[1520]: time="2025-03-21T12:40:35.641560432Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:57001f3912841fb47bb035bc94456d14,Namespace:kube-system,Attempt:0,} returns sandbox id \"71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88\"" Mar 21 12:40:35.642434 containerd[1520]: time="2025-03-21T12:40:35.642399468Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d79ab404294384d4bcc36fb5b5509bbb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9\"" Mar 21 12:40:35.644645 containerd[1520]: time="2025-03-21T12:40:35.644611662Z" level=info msg="CreateContainer within sandbox \"71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Mar 21 12:40:35.644959 containerd[1520]: time="2025-03-21T12:40:35.644832083Z" level=info msg="CreateContainer within sandbox \"ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Mar 21 12:40:35.653754 containerd[1520]: time="2025-03-21T12:40:35.653713110Z" level=info msg="Container a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:40:35.659767 containerd[1520]: time="2025-03-21T12:40:35.659736528Z" level=info msg="Container 4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:40:35.663426 containerd[1520]: time="2025-03-21T12:40:35.663392850Z" level=info msg="Container 3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:40:35.667126 containerd[1520]: time="2025-03-21T12:40:35.667087769Z" level=info msg="CreateContainer within sandbox \"7a1669e6182f092500804fca53b7b94f7244c1ddede6be7fb5d05214890b7c48\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c\"" Mar 21 12:40:35.667739 containerd[1520]: time="2025-03-21T12:40:35.667702063Z" level=info msg="StartContainer for \"a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c\"" Mar 21 12:40:35.668772 containerd[1520]: time="2025-03-21T12:40:35.668742011Z" 
level=info msg="connecting to shim a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c" address="unix:///run/containerd/s/cf72477f92725771ad71631f3f708be41a1a377179b2571d56ae5fb37d36bc2d" protocol=ttrpc version=3 Mar 21 12:40:35.671487 containerd[1520]: time="2025-03-21T12:40:35.671453226Z" level=info msg="CreateContainer within sandbox \"71ae597d82a400591d47396e492af7434bc6c1b6b41ccb227aad49ba5851ad88\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6\"" Mar 21 12:40:35.672350 containerd[1520]: time="2025-03-21T12:40:35.672303601Z" level=info msg="StartContainer for \"4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6\"" Mar 21 12:40:35.673233 containerd[1520]: time="2025-03-21T12:40:35.673202912Z" level=info msg="connecting to shim 4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6" address="unix:///run/containerd/s/32a355238df4ede7a605d4b79d68471de04529214d46ebe18207eba93f6fc200" protocol=ttrpc version=3 Mar 21 12:40:35.674896 containerd[1520]: time="2025-03-21T12:40:35.674822285Z" level=info msg="CreateContainer within sandbox \"ab969875de23f551a7a6b89492222affd0675ae752197f382935d1c06e6f21d9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d\"" Mar 21 12:40:35.675719 containerd[1520]: time="2025-03-21T12:40:35.675688642Z" level=info msg="StartContainer for \"3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d\"" Mar 21 12:40:35.676693 containerd[1520]: time="2025-03-21T12:40:35.676664246Z" level=info msg="connecting to shim 3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d" address="unix:///run/containerd/s/6d58329b03e6fe6d99fe7ceec923f20439a43651f5762217575a1887ad781404" protocol=ttrpc version=3 Mar 21 12:40:35.687489 systemd[1]: Started 
cri-containerd-a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c.scope - libcontainer container a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c. Mar 21 12:40:35.690636 systemd[1]: Started cri-containerd-4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6.scope - libcontainer container 4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6. Mar 21 12:40:35.695987 systemd[1]: Started cri-containerd-3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d.scope - libcontainer container 3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d. Mar 21 12:40:35.739280 containerd[1520]: time="2025-03-21T12:40:35.739236150Z" level=info msg="StartContainer for \"a64a3a5d14727659f7648c6950837da99f32244be91b12fe8e3e5b86c5f2351c\" returns successfully" Mar 21 12:40:35.749854 containerd[1520]: time="2025-03-21T12:40:35.749666101Z" level=info msg="StartContainer for \"3c685212efa7f36f8a7e948fdd94352f88505d913d5594a369c9b6ee9778fd9d\" returns successfully" Mar 21 12:40:35.751584 containerd[1520]: time="2025-03-21T12:40:35.751544204Z" level=info msg="StartContainer for \"4811bfb1bb8838d2e66fd61325686d88d761ae64521fa4d0bb97b1be1e53dbf6\" returns successfully" Mar 21 12:40:36.398771 kubelet[2393]: I0321 12:40:36.398710 2393 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Mar 21 12:40:36.712844 kubelet[2393]: E0321 12:40:36.712724 2393 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Mar 21 12:40:36.797385 kubelet[2393]: I0321 12:40:36.797322 2393 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Mar 21 12:40:36.806215 kubelet[2393]: E0321 12:40:36.805584 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:40:36.906529 kubelet[2393]: E0321 12:40:36.906488 2393 kubelet_node_status.go:462] "Error getting the 
current node from lister" err="node \"localhost\" not found" Mar 21 12:40:37.007026 kubelet[2393]: E0321 12:40:37.006923 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:40:37.107482 kubelet[2393]: E0321 12:40:37.107454 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:40:37.207958 kubelet[2393]: E0321 12:40:37.207927 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:40:37.308185 kubelet[2393]: E0321 12:40:37.308078 2393 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Mar 21 12:40:38.282610 kubelet[2393]: I0321 12:40:38.282538 2393 apiserver.go:52] "Watching apiserver" Mar 21 12:40:38.289477 kubelet[2393]: I0321 12:40:38.289420 2393 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 21 12:40:38.899685 systemd[1]: Reload requested from client PID 2667 ('systemctl') (unit session-7.scope)... Mar 21 12:40:38.899703 systemd[1]: Reloading... Mar 21 12:40:38.982378 zram_generator::config[2711]: No configuration found. Mar 21 12:40:39.123220 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:40:39.256192 systemd[1]: Reloading finished in 356 ms. Mar 21 12:40:39.286403 kubelet[2393]: I0321 12:40:39.286369 2393 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:40:39.286530 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:40:39.301743 systemd[1]: kubelet.service: Deactivated successfully. 
Mar 21 12:40:39.302080 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:40:39.302134 systemd[1]: kubelet.service: Consumed 763ms CPU time, 118.8M memory peak. Mar 21 12:40:39.304168 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:40:39.518947 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:40:39.528893 (kubelet)[2756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 21 12:40:39.579824 kubelet[2756]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 21 12:40:39.579824 kubelet[2756]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 21 12:40:39.579824 kubelet[2756]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 21 12:40:39.580355 kubelet[2756]: I0321 12:40:39.579855 2756 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 21 12:40:39.585306 kubelet[2756]: I0321 12:40:39.585250 2756 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 21 12:40:39.585306 kubelet[2756]: I0321 12:40:39.585287 2756 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 21 12:40:39.585600 kubelet[2756]: I0321 12:40:39.585573 2756 server.go:927] "Client rotation is on, will bootstrap in background" Mar 21 12:40:39.587197 kubelet[2756]: I0321 12:40:39.587166 2756 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 21 12:40:39.588789 kubelet[2756]: I0321 12:40:39.588755 2756 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 21 12:40:39.600037 kubelet[2756]: I0321 12:40:39.599995 2756 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 21 12:40:39.600339 kubelet[2756]: I0321 12:40:39.600294 2756 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 21 12:40:39.600573 kubelet[2756]: I0321 12:40:39.600354 2756 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 21 12:40:39.600676 kubelet[2756]: I0321 12:40:39.600580 2756 topology_manager.go:138] "Creating topology manager with none policy" Mar 21 
12:40:39.600676 kubelet[2756]: I0321 12:40:39.600593 2756 container_manager_linux.go:301] "Creating device plugin manager" Mar 21 12:40:39.600676 kubelet[2756]: I0321 12:40:39.600638 2756 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:40:39.600787 kubelet[2756]: I0321 12:40:39.600765 2756 kubelet.go:400] "Attempting to sync node with API server" Mar 21 12:40:39.600836 kubelet[2756]: I0321 12:40:39.600795 2756 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 21 12:40:39.600870 kubelet[2756]: I0321 12:40:39.600837 2756 kubelet.go:312] "Adding apiserver pod source" Mar 21 12:40:39.600902 kubelet[2756]: I0321 12:40:39.600879 2756 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 21 12:40:39.601859 kubelet[2756]: I0321 12:40:39.601828 2756 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 21 12:40:39.602126 kubelet[2756]: I0321 12:40:39.602065 2756 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 21 12:40:39.602578 kubelet[2756]: I0321 12:40:39.602561 2756 server.go:1264] "Started kubelet" Mar 21 12:40:39.602892 kubelet[2756]: I0321 12:40:39.602791 2756 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 21 12:40:39.602892 kubelet[2756]: I0321 12:40:39.602847 2756 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 21 12:40:39.603572 kubelet[2756]: I0321 12:40:39.603148 2756 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 21 12:40:39.604033 kubelet[2756]: I0321 12:40:39.603992 2756 server.go:455] "Adding debug handlers to kubelet server" Mar 21 12:40:39.608365 kubelet[2756]: I0321 12:40:39.606483 2756 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 21 12:40:39.613439 kubelet[2756]: I0321 12:40:39.613414 2756 
volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 21 12:40:39.613836 kubelet[2756]: E0321 12:40:39.613790 2756 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 21 12:40:39.614316 kubelet[2756]: I0321 12:40:39.614302 2756 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 21 12:40:39.614652 kubelet[2756]: I0321 12:40:39.614640 2756 reconciler.go:26] "Reconciler: start to sync state" Mar 21 12:40:39.614807 kubelet[2756]: I0321 12:40:39.614755 2756 factory.go:221] Registration of the systemd container factory successfully Mar 21 12:40:39.614893 kubelet[2756]: I0321 12:40:39.614860 2756 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 21 12:40:39.619079 kubelet[2756]: I0321 12:40:39.619047 2756 factory.go:221] Registration of the containerd container factory successfully Mar 21 12:40:39.621957 kubelet[2756]: I0321 12:40:39.621900 2756 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 21 12:40:39.623296 kubelet[2756]: I0321 12:40:39.623270 2756 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 21 12:40:39.623371 kubelet[2756]: I0321 12:40:39.623309 2756 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 21 12:40:39.623371 kubelet[2756]: I0321 12:40:39.623343 2756 kubelet.go:2337] "Starting kubelet main sync loop" Mar 21 12:40:39.623429 kubelet[2756]: E0321 12:40:39.623391 2756 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 21 12:40:39.660405 kubelet[2756]: I0321 12:40:39.660372 2756 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 21 12:40:39.660405 kubelet[2756]: I0321 12:40:39.660395 2756 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 21 12:40:39.660561 kubelet[2756]: I0321 12:40:39.660427 2756 state_mem.go:36] "Initialized new in-memory state store" Mar 21 12:40:39.660654 kubelet[2756]: I0321 12:40:39.660636 2756 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 21 12:40:39.660694 kubelet[2756]: I0321 12:40:39.660654 2756 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 21 12:40:39.660694 kubelet[2756]: I0321 12:40:39.660676 2756 policy_none.go:49] "None policy: Start" Mar 21 12:40:39.661278 kubelet[2756]: I0321 12:40:39.661228 2756 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 21 12:40:39.661278 kubelet[2756]: I0321 12:40:39.661256 2756 state_mem.go:35] "Initializing new in-memory state store" Mar 21 12:40:39.661482 kubelet[2756]: I0321 12:40:39.661419 2756 state_mem.go:75] "Updated machine memory state" Mar 21 12:40:39.665954 kubelet[2756]: I0321 12:40:39.665807 2756 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 21 12:40:39.666199 kubelet[2756]: I0321 12:40:39.666024 2756 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 21 12:40:39.666199 kubelet[2756]: I0321 12:40:39.666149 2756 
plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:40:39.716193 kubelet[2756]: I0321 12:40:39.716121 2756 kubelet_node_status.go:73] "Attempting to register node" node="localhost"
Mar 21 12:40:39.725402 kubelet[2756]: I0321 12:40:39.724553 2756 kubelet_node_status.go:112] "Node was previously registered" node="localhost"
Mar 21 12:40:39.725402 kubelet[2756]: I0321 12:40:39.724643 2756 kubelet_node_status.go:76] "Successfully registered node" node="localhost"
Mar 21 12:40:39.725402 kubelet[2756]: I0321 12:40:39.724633 2756 topology_manager.go:215] "Topology Admit Handler" podUID="57001f3912841fb47bb035bc94456d14" podNamespace="kube-system" podName="kube-apiserver-localhost"
Mar 21 12:40:39.725402 kubelet[2756]: I0321 12:40:39.724743 2756 topology_manager.go:215] "Topology Admit Handler" podUID="23a18e2dc14f395c5f1bea711a5a9344" podNamespace="kube-system" podName="kube-controller-manager-localhost"
Mar 21 12:40:39.725402 kubelet[2756]: I0321 12:40:39.724805 2756 topology_manager.go:215] "Topology Admit Handler" podUID="d79ab404294384d4bcc36fb5b5509bbb" podNamespace="kube-system" podName="kube-scheduler-localhost"
Mar 21 12:40:39.815244 kubelet[2756]: I0321 12:40:39.815079 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:39.815244 kubelet[2756]: I0321 12:40:39.815117 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:39.815244 kubelet[2756]: I0321 12:40:39.815138 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:39.815244 kubelet[2756]: I0321 12:40:39.815153 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:39.815244 kubelet[2756]: I0321 12:40:39.815169 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:39.815569 kubelet[2756]: I0321 12:40:39.815183 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/57001f3912841fb47bb035bc94456d14-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"57001f3912841fb47bb035bc94456d14\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:39.815569 kubelet[2756]: I0321 12:40:39.815221 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:39.815569 kubelet[2756]: I0321 12:40:39.815238 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/23a18e2dc14f395c5f1bea711a5a9344-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"23a18e2dc14f395c5f1bea711a5a9344\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:40:39.815569 kubelet[2756]: I0321 12:40:39.815263 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d79ab404294384d4bcc36fb5b5509bbb-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d79ab404294384d4bcc36fb5b5509bbb\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:40:40.602117 kubelet[2756]: I0321 12:40:40.602061 2756 apiserver.go:52] "Watching apiserver"
Mar 21 12:40:40.615517 kubelet[2756]: I0321 12:40:40.615457 2756 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:40:40.649162 kubelet[2756]: E0321 12:40:40.649122 2756 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 21 12:40:40.664300 kubelet[2756]: I0321 12:40:40.662915 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.662898664 podStartE2EDuration="1.662898664s" podCreationTimestamp="2025-03-21 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:40:40.660878834 +0000 UTC m=+1.126918828" watchObservedRunningTime="2025-03-21 12:40:40.662898664 +0000 UTC m=+1.128938658"
Mar 21 12:40:40.690950 kubelet[2756]: I0321 12:40:40.690870 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.690789488 podStartE2EDuration="1.690789488s" podCreationTimestamp="2025-03-21 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:40:40.6766434 +0000 UTC m=+1.142683394" watchObservedRunningTime="2025-03-21 12:40:40.690789488 +0000 UTC m=+1.156829492"
Mar 21 12:40:40.691137 kubelet[2756]: I0321 12:40:40.690986 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.690979348 podStartE2EDuration="1.690979348s" podCreationTimestamp="2025-03-21 12:40:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:40:40.690658567 +0000 UTC m=+1.156698561" watchObservedRunningTime="2025-03-21 12:40:40.690979348 +0000 UTC m=+1.157019362"
Mar 21 12:40:44.219833 sudo[1715]: pam_unix(sudo:session): session closed for user root
Mar 21 12:40:44.221309 sshd[1714]: Connection closed by 10.0.0.1 port 53764
Mar 21 12:40:44.221717 sshd-session[1711]: pam_unix(sshd:session): session closed for user core
Mar 21 12:40:44.226963 systemd[1]: sshd@6-10.0.0.131:22-10.0.0.1:53764.service: Deactivated successfully.
Mar 21 12:40:44.229826 systemd[1]: session-7.scope: Deactivated successfully.
Mar 21 12:40:44.230093 systemd[1]: session-7.scope: Consumed 4.526s CPU time, 234.6M memory peak.
Mar 21 12:40:44.231450 systemd-logind[1497]: Session 7 logged out. Waiting for processes to exit.
Mar 21 12:40:44.232351 systemd-logind[1497]: Removed session 7.
Mar 21 12:40:52.123226 kubelet[2756]: I0321 12:40:52.123151 2756 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Mar 21 12:40:52.123793 kubelet[2756]: I0321 12:40:52.123726 2756 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Mar 21 12:40:52.123871 containerd[1520]: time="2025-03-21T12:40:52.123572309Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Mar 21 12:40:52.948871 update_engine[1502]: I20250321 12:40:52.948796 1502 update_attempter.cc:509] Updating boot flags...
Mar 21 12:40:52.991386 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2849)
Mar 21 12:40:53.028162 kubelet[2756]: I0321 12:40:53.027887 2756 topology_manager.go:215] "Topology Admit Handler" podUID="da60a0dc-c56a-4b92-81a1-922b1c0aa12f" podNamespace="kube-system" podName="kube-proxy-j5c55"
Mar 21 12:40:53.040370 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2851)
Mar 21 12:40:53.080367 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2851)
Mar 21 12:40:53.091154 kubelet[2756]: I0321 12:40:53.091015 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/da60a0dc-c56a-4b92-81a1-922b1c0aa12f-xtables-lock\") pod \"kube-proxy-j5c55\" (UID: \"da60a0dc-c56a-4b92-81a1-922b1c0aa12f\") " pod="kube-system/kube-proxy-j5c55"
Mar 21 12:40:53.091154 kubelet[2756]: I0321 12:40:53.091051 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6nsqx\" (UniqueName: \"kubernetes.io/projected/da60a0dc-c56a-4b92-81a1-922b1c0aa12f-kube-api-access-6nsqx\") pod \"kube-proxy-j5c55\" (UID: \"da60a0dc-c56a-4b92-81a1-922b1c0aa12f\") " pod="kube-system/kube-proxy-j5c55"
Mar 21 12:40:53.091154 kubelet[2756]: I0321 12:40:53.091068 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/da60a0dc-c56a-4b92-81a1-922b1c0aa12f-kube-proxy\") pod \"kube-proxy-j5c55\" (UID: \"da60a0dc-c56a-4b92-81a1-922b1c0aa12f\") " pod="kube-system/kube-proxy-j5c55"
Mar 21 12:40:53.091154 kubelet[2756]: I0321 12:40:53.091081 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/da60a0dc-c56a-4b92-81a1-922b1c0aa12f-lib-modules\") pod \"kube-proxy-j5c55\" (UID: \"da60a0dc-c56a-4b92-81a1-922b1c0aa12f\") " pod="kube-system/kube-proxy-j5c55"
Mar 21 12:40:53.119812 systemd[1]: Created slice kubepods-besteffort-podda60a0dc_c56a_4b92_81a1_922b1c0aa12f.slice - libcontainer container kubepods-besteffort-podda60a0dc_c56a_4b92_81a1_922b1c0aa12f.slice.
Mar 21 12:40:53.140431 kubelet[2756]: I0321 12:40:53.139866 2756 topology_manager.go:215] "Topology Admit Handler" podUID="64e292eb-1ad2-418c-ad87-4904bd37ef2d" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-w88cq"
Mar 21 12:40:53.155291 systemd[1]: Created slice kubepods-besteffort-pod64e292eb_1ad2_418c_ad87_4904bd37ef2d.slice - libcontainer container kubepods-besteffort-pod64e292eb_1ad2_418c_ad87_4904bd37ef2d.slice.
Mar 21 12:40:53.191512 kubelet[2756]: I0321 12:40:53.191453 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cxv7g\" (UniqueName: \"kubernetes.io/projected/64e292eb-1ad2-418c-ad87-4904bd37ef2d-kube-api-access-cxv7g\") pod \"tigera-operator-6479d6dc54-w88cq\" (UID: \"64e292eb-1ad2-418c-ad87-4904bd37ef2d\") " pod="tigera-operator/tigera-operator-6479d6dc54-w88cq"
Mar 21 12:40:53.191512 kubelet[2756]: I0321 12:40:53.191516 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/64e292eb-1ad2-418c-ad87-4904bd37ef2d-var-lib-calico\") pod \"tigera-operator-6479d6dc54-w88cq\" (UID: \"64e292eb-1ad2-418c-ad87-4904bd37ef2d\") " pod="tigera-operator/tigera-operator-6479d6dc54-w88cq"
Mar 21 12:40:53.451823 containerd[1520]: time="2025-03-21T12:40:53.451785184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5c55,Uid:da60a0dc-c56a-4b92-81a1-922b1c0aa12f,Namespace:kube-system,Attempt:0,}"
Mar 21 12:40:53.458238 containerd[1520]: time="2025-03-21T12:40:53.458203888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-w88cq,Uid:64e292eb-1ad2-418c-ad87-4904bd37ef2d,Namespace:tigera-operator,Attempt:0,}"
Mar 21 12:40:53.496234 containerd[1520]: time="2025-03-21T12:40:53.496150819Z" level=info msg="connecting to shim 4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4" address="unix:///run/containerd/s/8b74298d774d9d9012c805b01f163a970e83e909f00a209724ba00e713ee566d" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:40:53.497833 containerd[1520]: time="2025-03-21T12:40:53.497771807Z" level=info msg="connecting to shim 23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13" address="unix:///run/containerd/s/a33342df9e6dad0a8e3f0fbaec77b024ffdefae73994fd24abbadf10da81ee09" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:40:53.544509 systemd[1]: Started cri-containerd-23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13.scope - libcontainer container 23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13.
Mar 21 12:40:53.546023 systemd[1]: Started cri-containerd-4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4.scope - libcontainer container 4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4.
Mar 21 12:40:53.596378 containerd[1520]: time="2025-03-21T12:40:53.596325032Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j5c55,Uid:da60a0dc-c56a-4b92-81a1-922b1c0aa12f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4\""
Mar 21 12:40:53.598834 containerd[1520]: time="2025-03-21T12:40:53.598684436Z" level=info msg="CreateContainer within sandbox \"4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 21 12:40:53.720828 containerd[1520]: time="2025-03-21T12:40:53.720698274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-w88cq,Uid:64e292eb-1ad2-418c-ad87-4904bd37ef2d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13\""
Mar 21 12:40:53.722303 containerd[1520]: time="2025-03-21T12:40:53.722215923Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 21 12:40:53.781243 containerd[1520]: time="2025-03-21T12:40:53.781210478Z" level=info msg="Container e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:40:53.790238 containerd[1520]: time="2025-03-21T12:40:53.790180094Z" level=info msg="CreateContainer within sandbox \"4a3753a007888c8447379ae593a3a1b4dc1ccecdf548c4860d828513ddcbc5a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90\""
Mar 21 12:40:53.790785 containerd[1520]: time="2025-03-21T12:40:53.790752318Z" level=info msg="StartContainer for \"e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90\""
Mar 21 12:40:53.792071 containerd[1520]: time="2025-03-21T12:40:53.791999503Z" level=info msg="connecting to shim e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90" address="unix:///run/containerd/s/8b74298d774d9d9012c805b01f163a970e83e909f00a209724ba00e713ee566d" protocol=ttrpc version=3
Mar 21 12:40:53.813444 systemd[1]: Started cri-containerd-e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90.scope - libcontainer container e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90.
Mar 21 12:40:53.857483 containerd[1520]: time="2025-03-21T12:40:53.857438407Z" level=info msg="StartContainer for \"e8d304553481b81b5c345068220d58289b5aaedf10b54791b0d6d87e01c28f90\" returns successfully"
Mar 21 12:40:54.671166 kubelet[2756]: I0321 12:40:54.671107 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j5c55" podStartSLOduration=1.671088787 podStartE2EDuration="1.671088787s" podCreationTimestamp="2025-03-21 12:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:40:54.670757057 +0000 UTC m=+15.136797081" watchObservedRunningTime="2025-03-21 12:40:54.671088787 +0000 UTC m=+15.137128781"
Mar 21 12:40:55.330982 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3777622237.mount: Deactivated successfully.
Mar 21 12:40:56.107679 containerd[1520]: time="2025-03-21T12:40:56.107612900Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:56.108478 containerd[1520]: time="2025-03-21T12:40:56.108441740Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 21 12:40:56.109720 containerd[1520]: time="2025-03-21T12:40:56.109656807Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:56.112027 containerd[1520]: time="2025-03-21T12:40:56.111979296Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:40:56.112604 containerd[1520]: time="2025-03-21T12:40:56.112568095Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.390322419s"
Mar 21 12:40:56.112604 containerd[1520]: time="2025-03-21T12:40:56.112601674Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 21 12:40:56.114309 containerd[1520]: time="2025-03-21T12:40:56.114284408Z" level=info msg="CreateContainer within sandbox \"23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 21 12:40:56.122867 containerd[1520]: time="2025-03-21T12:40:56.122836941Z" level=info msg="Container ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:40:56.129761 containerd[1520]: time="2025-03-21T12:40:56.129725752Z" level=info msg="CreateContainer within sandbox \"23728bead45a8ba815a365d68c6e6074640f8ccb95cf11bc2cc6c6d6ae572f13\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3\""
Mar 21 12:40:56.130405 containerd[1520]: time="2025-03-21T12:40:56.130366390Z" level=info msg="StartContainer for \"ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3\""
Mar 21 12:40:56.131512 containerd[1520]: time="2025-03-21T12:40:56.131475335Z" level=info msg="connecting to shim ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3" address="unix:///run/containerd/s/a33342df9e6dad0a8e3f0fbaec77b024ffdefae73994fd24abbadf10da81ee09" protocol=ttrpc version=3
Mar 21 12:40:56.153538 systemd[1]: Started cri-containerd-ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3.scope - libcontainer container ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3.
Mar 21 12:40:56.186278 containerd[1520]: time="2025-03-21T12:40:56.186241433Z" level=info msg="StartContainer for \"ef39b4152e75a5847604a23e8dc734928e40d86da0e45ab4d34c590026f8d8e3\" returns successfully"
Mar 21 12:41:00.325552 kubelet[2756]: I0321 12:41:00.325475 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-w88cq" podStartSLOduration=4.9339717279999995 podStartE2EDuration="7.325457268s" podCreationTimestamp="2025-03-21 12:40:53 +0000 UTC" firstStartedPulling="2025-03-21 12:40:53.721809982 +0000 UTC m=+14.187849976" lastFinishedPulling="2025-03-21 12:40:56.113295522 +0000 UTC m=+16.579335516" observedRunningTime="2025-03-21 12:40:56.676027562 +0000 UTC m=+17.142067557" watchObservedRunningTime="2025-03-21 12:41:00.325457268 +0000 UTC m=+20.791497262"
Mar 21 12:41:00.333072 kubelet[2756]: I0321 12:41:00.332364 2756 topology_manager.go:215] "Topology Admit Handler" podUID="d4ba9591-8c06-4759-8652-5777007b5f5c" podNamespace="calico-system" podName="calico-typha-7977665d8-khm4h"
Mar 21 12:41:00.340891 systemd[1]: Created slice kubepods-besteffort-podd4ba9591_8c06_4759_8652_5777007b5f5c.slice - libcontainer container kubepods-besteffort-podd4ba9591_8c06_4759_8652_5777007b5f5c.slice.
Mar 21 12:41:00.364383 kubelet[2756]: I0321 12:41:00.364319 2756 topology_manager.go:215] "Topology Admit Handler" podUID="a94fc11b-bfc7-4c29-9b61-a1f561b8bc96" podNamespace="calico-system" podName="calico-node-mzcn6"
Mar 21 12:41:00.373949 systemd[1]: Created slice kubepods-besteffort-poda94fc11b_bfc7_4c29_9b61_a1f561b8bc96.slice - libcontainer container kubepods-besteffort-poda94fc11b_bfc7_4c29_9b61_a1f561b8bc96.slice.
Mar 21 12:41:00.439144 kubelet[2756]: I0321 12:41:00.439103 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d4ba9591-8c06-4759-8652-5777007b5f5c-tigera-ca-bundle\") pod \"calico-typha-7977665d8-khm4h\" (UID: \"d4ba9591-8c06-4759-8652-5777007b5f5c\") " pod="calico-system/calico-typha-7977665d8-khm4h"
Mar 21 12:41:00.439144 kubelet[2756]: I0321 12:41:00.439143 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljkb9\" (UniqueName: \"kubernetes.io/projected/d4ba9591-8c06-4759-8652-5777007b5f5c-kube-api-access-ljkb9\") pod \"calico-typha-7977665d8-khm4h\" (UID: \"d4ba9591-8c06-4759-8652-5777007b5f5c\") " pod="calico-system/calico-typha-7977665d8-khm4h"
Mar 21 12:41:00.439144 kubelet[2756]: I0321 12:41:00.439161 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d4ba9591-8c06-4759-8652-5777007b5f5c-typha-certs\") pod \"calico-typha-7977665d8-khm4h\" (UID: \"d4ba9591-8c06-4759-8652-5777007b5f5c\") " pod="calico-system/calico-typha-7977665d8-khm4h"
Mar 21 12:41:00.475612 kubelet[2756]: I0321 12:41:00.475558 2756 topology_manager.go:215] "Topology Admit Handler" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" podNamespace="calico-system" podName="csi-node-driver-w4l5h"
Mar 21 12:41:00.475844 kubelet[2756]: E0321 12:41:00.475801 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb"
Mar 21 12:41:00.541517 kubelet[2756]: I0321 12:41:00.541470 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-xtables-lock\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541517 kubelet[2756]: I0321 12:41:00.541513 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-var-run-calico\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541680 kubelet[2756]: I0321 12:41:00.541546 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-var-lib-calico\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541680 kubelet[2756]: I0321 12:41:00.541563 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-node-certs\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541680 kubelet[2756]: I0321 12:41:00.541577 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-cni-bin-dir\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541680 kubelet[2756]: I0321 12:41:00.541617 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-policysync\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541680 kubelet[2756]: I0321 12:41:00.541664 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqfxk\" (UniqueName: \"kubernetes.io/projected/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-kube-api-access-nqfxk\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541805 kubelet[2756]: I0321 12:41:00.541702 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-cni-log-dir\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541805 kubelet[2756]: I0321 12:41:00.541794 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-lib-modules\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541917 kubelet[2756]: I0321 12:41:00.541809 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-tigera-ca-bundle\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541917 kubelet[2756]: I0321 12:41:00.541844 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-cni-net-dir\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.541917 kubelet[2756]: I0321 12:41:00.541868 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/a94fc11b-bfc7-4c29-9b61-a1f561b8bc96-flexvol-driver-host\") pod \"calico-node-mzcn6\" (UID: \"a94fc11b-bfc7-4c29-9b61-a1f561b8bc96\") " pod="calico-system/calico-node-mzcn6"
Mar 21 12:41:00.642646 kubelet[2756]: I0321 12:41:00.642437 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e815f45-8221-492c-b826-aef1cf581aeb-registration-dir\") pod \"csi-node-driver-w4l5h\" (UID: \"0e815f45-8221-492c-b826-aef1cf581aeb\") " pod="calico-system/csi-node-driver-w4l5h"
Mar 21 12:41:00.642646 kubelet[2756]: I0321 12:41:00.642477 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e815f45-8221-492c-b826-aef1cf581aeb-varrun\") pod \"csi-node-driver-w4l5h\" (UID: \"0e815f45-8221-492c-b826-aef1cf581aeb\") " pod="calico-system/csi-node-driver-w4l5h"
Mar 21 12:41:00.642646 kubelet[2756]: I0321 12:41:00.642494 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e815f45-8221-492c-b826-aef1cf581aeb-socket-dir\") pod \"csi-node-driver-w4l5h\" (UID: \"0e815f45-8221-492c-b826-aef1cf581aeb\") " pod="calico-system/csi-node-driver-w4l5h"
Mar 21 12:41:00.642864 kubelet[2756]: I0321 12:41:00.642675 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hqknc\" (UniqueName: \"kubernetes.io/projected/0e815f45-8221-492c-b826-aef1cf581aeb-kube-api-access-hqknc\") pod \"csi-node-driver-w4l5h\" (UID: \"0e815f45-8221-492c-b826-aef1cf581aeb\") " pod="calico-system/csi-node-driver-w4l5h"
Mar 21 12:41:00.642864 kubelet[2756]: I0321 12:41:00.642712 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e815f45-8221-492c-b826-aef1cf581aeb-kubelet-dir\") pod \"csi-node-driver-w4l5h\" (UID: \"0e815f45-8221-492c-b826-aef1cf581aeb\") " pod="calico-system/csi-node-driver-w4l5h"
Mar 21 12:41:00.647609 kubelet[2756]: E0321 12:41:00.647575 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.647609 kubelet[2756]: W0321 12:41:00.647596 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.647609 kubelet[2756]: E0321 12:41:00.647611 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.649527 containerd[1520]: time="2025-03-21T12:41:00.649440973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7977665d8-khm4h,Uid:d4ba9591-8c06-4759-8652-5777007b5f5c,Namespace:calico-system,Attempt:0,}"
Mar 21 12:41:00.650549 kubelet[2756]: E0321 12:41:00.650507 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.650549 kubelet[2756]: W0321 12:41:00.650518 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.650549 kubelet[2756]: E0321 12:41:00.650528 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.670381 containerd[1520]: time="2025-03-21T12:41:00.670345787Z" level=info msg="connecting to shim 7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283" address="unix:///run/containerd/s/0a352f71885d8452118ffe08f1c7997a40b69f20ec70c435eea8dfb46893e7e1" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:41:00.677346 containerd[1520]: time="2025-03-21T12:41:00.677112040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzcn6,Uid:a94fc11b-bfc7-4c29-9b61-a1f561b8bc96,Namespace:calico-system,Attempt:0,}"
Mar 21 12:41:00.695595 systemd[1]: Started cri-containerd-7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283.scope - libcontainer container 7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283.
Mar 21 12:41:00.702151 containerd[1520]: time="2025-03-21T12:41:00.702102196Z" level=info msg="connecting to shim c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501" address="unix:///run/containerd/s/e3ab5c1b2a466456842c25f624e5056e9bf8a507086573fe07e67bd8baf783ed" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:41:00.723480 systemd[1]: Started cri-containerd-c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501.scope - libcontainer container c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501.
Mar 21 12:41:00.743319 kubelet[2756]: E0321 12:41:00.743287 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.743319 kubelet[2756]: W0321 12:41:00.743308 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.743500 kubelet[2756]: E0321 12:41:00.743343 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.743565 kubelet[2756]: E0321 12:41:00.743556 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.743605 kubelet[2756]: W0321 12:41:00.743566 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.743605 kubelet[2756]: E0321 12:41:00.743578 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.743842 kubelet[2756]: E0321 12:41:00.743815 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.743842 kubelet[2756]: W0321 12:41:00.743827 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.743842 kubelet[2756]: E0321 12:41:00.743839 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.744089 kubelet[2756]: E0321 12:41:00.744074 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.744089 kubelet[2756]: W0321 12:41:00.744085 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.744206 kubelet[2756]: E0321 12:41:00.744106 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.744660 kubelet[2756]: E0321 12:41:00.744554 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.744660 kubelet[2756]: W0321 12:41:00.744568 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.744660 kubelet[2756]: E0321 12:41:00.744580 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.745190 containerd[1520]: time="2025-03-21T12:41:00.745120995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7977665d8-khm4h,Uid:d4ba9591-8c06-4759-8652-5777007b5f5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283\""
Mar 21 12:41:00.745487 kubelet[2756]: E0321 12:41:00.745443 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.745654 kubelet[2756]: W0321 12:41:00.745464 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.745654 kubelet[2756]: E0321 12:41:00.745580 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.746223 kubelet[2756]: E0321 12:41:00.746209 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.746467 kubelet[2756]: W0321 12:41:00.746285 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.746467 kubelet[2756]: E0321 12:41:00.746455 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.746632 kubelet[2756]: E0321 12:41:00.746619 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.746798 kubelet[2756]: W0321 12:41:00.746686 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.746798 kubelet[2756]: E0321 12:41:00.746720 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.746949 kubelet[2756]: E0321 12:41:00.746924 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.746949 kubelet[2756]: W0321 12:41:00.746937 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.747030 kubelet[2756]: E0321 12:41:00.747015 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Mar 21 12:41:00.747227 kubelet[2756]: E0321 12:41:00.747212 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:41:00.747227 kubelet[2756]: W0321 12:41:00.747223 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:41:00.747316 kubelet[2756]: E0321 12:41:00.747298 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Mar 21 12:41:00.747507 kubelet[2756]: E0321 12:41:00.747483 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.747507 kubelet[2756]: W0321 12:41:00.747494 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.747585 kubelet[2756]: E0321 12:41:00.747570 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.747723 kubelet[2756]: E0321 12:41:00.747709 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.747723 kubelet[2756]: W0321 12:41:00.747719 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.747790 kubelet[2756]: E0321 12:41:00.747740 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.748361 kubelet[2756]: E0321 12:41:00.748231 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.748361 kubelet[2756]: W0321 12:41:00.748243 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.748361 kubelet[2756]: E0321 12:41:00.748282 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.749154 kubelet[2756]: E0321 12:41:00.749137 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.749154 kubelet[2756]: W0321 12:41:00.749151 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.749307 kubelet[2756]: E0321 12:41:00.749235 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.749487 kubelet[2756]: E0321 12:41:00.749470 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.749487 kubelet[2756]: W0321 12:41:00.749483 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.749719 kubelet[2756]: E0321 12:41:00.749703 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.749719 kubelet[2756]: W0321 12:41:00.749715 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.749912 kubelet[2756]: E0321 12:41:00.749893 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.749948 kubelet[2756]: E0321 12:41:00.749920 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.749971 kubelet[2756]: E0321 12:41:00.749959 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.749992 kubelet[2756]: W0321 12:41:00.749976 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.750016 kubelet[2756]: E0321 12:41:00.749991 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.750488 kubelet[2756]: E0321 12:41:00.750471 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.750488 kubelet[2756]: W0321 12:41:00.750484 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.750570 kubelet[2756]: E0321 12:41:00.750500 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.750698 kubelet[2756]: E0321 12:41:00.750682 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.750698 kubelet[2756]: W0321 12:41:00.750694 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.750794 kubelet[2756]: E0321 12:41:00.750703 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.750925 kubelet[2756]: E0321 12:41:00.750903 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.750925 kubelet[2756]: W0321 12:41:00.750915 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.750925 kubelet[2756]: E0321 12:41:00.750923 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.751157 kubelet[2756]: E0321 12:41:00.751142 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.751157 kubelet[2756]: W0321 12:41:00.751153 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.751206 kubelet[2756]: E0321 12:41:00.751161 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.751428 kubelet[2756]: E0321 12:41:00.751381 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.751428 kubelet[2756]: W0321 12:41:00.751395 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.751428 kubelet[2756]: E0321 12:41:00.751403 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.752450 kubelet[2756]: E0321 12:41:00.752436 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.752508 kubelet[2756]: W0321 12:41:00.752497 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.752555 kubelet[2756]: E0321 12:41:00.752545 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.752954 kubelet[2756]: E0321 12:41:00.752880 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.752954 kubelet[2756]: W0321 12:41:00.752890 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.752954 kubelet[2756]: E0321 12:41:00.752899 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:00.755854 kubelet[2756]: E0321 12:41:00.755842 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.755993 kubelet[2756]: W0321 12:41:00.755908 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.755993 kubelet[2756]: E0321 12:41:00.755921 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:00.756628 containerd[1520]: time="2025-03-21T12:41:00.756597102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mzcn6,Uid:a94fc11b-bfc7-4c29-9b61-a1f561b8bc96,Namespace:calico-system,Attempt:0,} returns sandbox id \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\"" Mar 21 12:41:00.763954 containerd[1520]: time="2025-03-21T12:41:00.763828209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 21 12:41:00.764924 kubelet[2756]: E0321 12:41:00.764906 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:00.764924 kubelet[2756]: W0321 12:41:00.764923 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:00.765011 kubelet[2756]: E0321 12:41:00.764941 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:02.624526 kubelet[2756]: E0321 12:41:02.624471 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:03.045502 containerd[1520]: time="2025-03-21T12:41:03.045364306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:03.046238 containerd[1520]: time="2025-03-21T12:41:03.046186014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 21 12:41:03.047353 containerd[1520]: time="2025-03-21T12:41:03.047309004Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:03.049313 containerd[1520]: time="2025-03-21T12:41:03.049267801Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:03.049869 containerd[1520]: time="2025-03-21T12:41:03.049831757Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 2.285968135s" Mar 21 12:41:03.049912 containerd[1520]: time="2025-03-21T12:41:03.049869353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference 
\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 21 12:41:03.051275 containerd[1520]: time="2025-03-21T12:41:03.051243924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 21 12:41:03.060902 containerd[1520]: time="2025-03-21T12:41:03.060862826Z" level=info msg="CreateContainer within sandbox \"7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 21 12:41:03.069447 containerd[1520]: time="2025-03-21T12:41:03.069393898Z" level=info msg="Container e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:03.077024 containerd[1520]: time="2025-03-21T12:41:03.076990352Z" level=info msg="CreateContainer within sandbox \"7436dfb30f4b4f46270e00a4b417489843503a11f274d3e39efb522a68ae0283\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999\"" Mar 21 12:41:03.077389 containerd[1520]: time="2025-03-21T12:41:03.077350303Z" level=info msg="StartContainer for \"e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999\"" Mar 21 12:41:03.078437 containerd[1520]: time="2025-03-21T12:41:03.078365725Z" level=info msg="connecting to shim e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999" address="unix:///run/containerd/s/0a352f71885d8452118ffe08f1c7997a40b69f20ec70c435eea8dfb46893e7e1" protocol=ttrpc version=3 Mar 21 12:41:03.102492 systemd[1]: Started cri-containerd-e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999.scope - libcontainer container e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999. 
Mar 21 12:41:03.308157 containerd[1520]: time="2025-03-21T12:41:03.308036409Z" level=info msg="StartContainer for \"e0fd1e22d0f83020c9a0547feebc3a34d3999341b1259ec2e3491b6e6e785999\" returns successfully" Mar 21 12:41:03.694557 kubelet[2756]: I0321 12:41:03.694492 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7977665d8-khm4h" podStartSLOduration=1.407463109 podStartE2EDuration="3.694476352s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:00.763616895 +0000 UTC m=+21.229656889" lastFinishedPulling="2025-03-21 12:41:03.050630148 +0000 UTC m=+23.516670132" observedRunningTime="2025-03-21 12:41:03.69408117 +0000 UTC m=+24.160121164" watchObservedRunningTime="2025-03-21 12:41:03.694476352 +0000 UTC m=+24.160516346" Mar 21 12:41:03.764302 kubelet[2756]: E0321 12:41:03.764255 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.764302 kubelet[2756]: W0321 12:41:03.764287 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.764465 kubelet[2756]: E0321 12:41:03.764311 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
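The pod_startup_latency_tracker record above reports podStartSLOduration=1.407463109s alongside podStartE2EDuration=3.694476352s. The SLO figure appears to be the end-to-end duration minus the image-pull window. A minimal sketch checking that arithmetic, assuming the monotonic m=+ offsets from the record and that kubelet excludes pull time from the SLO duration:

```python
# Offsets copied from the log record above (m=+ monotonic clock, seconds).
first_started_pulling = 21.229656889  # firstStartedPulling m=+21.229656889
last_finished_pulling = 23.516670132  # lastFinishedPulling m=+23.516670132
pod_start_e2e = 3.694476352           # podStartE2EDuration="3.694476352s"

# Assumed relationship: SLO duration = end-to-end duration - image-pull time.
pull_time = last_finished_pulling - first_started_pulling
slo_duration = pod_start_e2e - pull_time
print(round(slo_duration, 9))
```

Rounded to nanoseconds this reproduces the logged podStartSLOduration=1.407463109, consistent with the pull window being excluded.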
Error: unexpected end of JSON input" Mar 21 12:41:03.764602 kubelet[2756]: E0321 12:41:03.764574 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.764602 kubelet[2756]: W0321 12:41:03.764589 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.764602 kubelet[2756]: E0321 12:41:03.764600 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.764841 kubelet[2756]: E0321 12:41:03.764825 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.764841 kubelet[2756]: W0321 12:41:03.764838 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.764899 kubelet[2756]: E0321 12:41:03.764848 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.765085 kubelet[2756]: E0321 12:41:03.765071 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.765085 kubelet[2756]: W0321 12:41:03.765083 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.765144 kubelet[2756]: E0321 12:41:03.765093 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.765362 kubelet[2756]: E0321 12:41:03.765347 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.765362 kubelet[2756]: W0321 12:41:03.765360 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.765415 kubelet[2756]: E0321 12:41:03.765370 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.765616 kubelet[2756]: E0321 12:41:03.765599 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.765616 kubelet[2756]: W0321 12:41:03.765614 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.765670 kubelet[2756]: E0321 12:41:03.765625 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.765858 kubelet[2756]: E0321 12:41:03.765842 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.765858 kubelet[2756]: W0321 12:41:03.765855 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.765908 kubelet[2756]: E0321 12:41:03.765868 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.766116 kubelet[2756]: E0321 12:41:03.766101 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.766116 kubelet[2756]: W0321 12:41:03.766113 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.766164 kubelet[2756]: E0321 12:41:03.766123 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.766366 kubelet[2756]: E0321 12:41:03.766351 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.766366 kubelet[2756]: W0321 12:41:03.766363 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.766432 kubelet[2756]: E0321 12:41:03.766373 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.766590 kubelet[2756]: E0321 12:41:03.766576 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.766590 kubelet[2756]: W0321 12:41:03.766588 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.766635 kubelet[2756]: E0321 12:41:03.766600 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.766822 kubelet[2756]: E0321 12:41:03.766804 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.766822 kubelet[2756]: W0321 12:41:03.766819 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.766879 kubelet[2756]: E0321 12:41:03.766830 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.767048 kubelet[2756]: E0321 12:41:03.767032 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.767074 kubelet[2756]: W0321 12:41:03.767046 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.767074 kubelet[2756]: E0321 12:41:03.767057 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.767283 kubelet[2756]: E0321 12:41:03.767266 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.767283 kubelet[2756]: W0321 12:41:03.767280 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.767359 kubelet[2756]: E0321 12:41:03.767291 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.767549 kubelet[2756]: E0321 12:41:03.767533 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.767549 kubelet[2756]: W0321 12:41:03.767546 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.767595 kubelet[2756]: E0321 12:41:03.767556 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.767770 kubelet[2756]: E0321 12:41:03.767756 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.767770 kubelet[2756]: W0321 12:41:03.767768 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.767825 kubelet[2756]: E0321 12:41:03.767778 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.768073 kubelet[2756]: E0321 12:41:03.768057 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.768073 kubelet[2756]: W0321 12:41:03.768070 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.768138 kubelet[2756]: E0321 12:41:03.768080 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.768343 kubelet[2756]: E0321 12:41:03.768314 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.768382 kubelet[2756]: W0321 12:41:03.768344 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.768382 kubelet[2756]: E0321 12:41:03.768362 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.768704 kubelet[2756]: E0321 12:41:03.768678 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.768704 kubelet[2756]: W0321 12:41:03.768698 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.768750 kubelet[2756]: E0321 12:41:03.768722 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.768926 kubelet[2756]: E0321 12:41:03.768909 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.768926 kubelet[2756]: W0321 12:41:03.768918 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.768988 kubelet[2756]: E0321 12:41:03.768930 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.769141 kubelet[2756]: E0321 12:41:03.769125 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.769141 kubelet[2756]: W0321 12:41:03.769134 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.769189 kubelet[2756]: E0321 12:41:03.769148 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.769408 kubelet[2756]: E0321 12:41:03.769397 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.769442 kubelet[2756]: W0321 12:41:03.769407 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.769442 kubelet[2756]: E0321 12:41:03.769419 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.769734 kubelet[2756]: E0321 12:41:03.769716 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.769734 kubelet[2756]: W0321 12:41:03.769732 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.769790 kubelet[2756]: E0321 12:41:03.769749 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.769974 kubelet[2756]: E0321 12:41:03.769956 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.769974 kubelet[2756]: W0321 12:41:03.769967 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.770024 kubelet[2756]: E0321 12:41:03.769978 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.770187 kubelet[2756]: E0321 12:41:03.770171 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.770213 kubelet[2756]: W0321 12:41:03.770186 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.770213 kubelet[2756]: E0321 12:41:03.770207 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.770493 kubelet[2756]: E0321 12:41:03.770475 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.770493 kubelet[2756]: W0321 12:41:03.770490 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.770557 kubelet[2756]: E0321 12:41:03.770509 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.770775 kubelet[2756]: E0321 12:41:03.770751 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.770775 kubelet[2756]: W0321 12:41:03.770767 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.770821 kubelet[2756]: E0321 12:41:03.770786 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.771033 kubelet[2756]: E0321 12:41:03.771019 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.771054 kubelet[2756]: W0321 12:41:03.771032 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.771054 kubelet[2756]: E0321 12:41:03.771045 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.771213 kubelet[2756]: E0321 12:41:03.771203 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.771213 kubelet[2756]: W0321 12:41:03.771211 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.771268 kubelet[2756]: E0321 12:41:03.771223 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.771470 kubelet[2756]: E0321 12:41:03.771459 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.771470 kubelet[2756]: W0321 12:41:03.771469 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.771524 kubelet[2756]: E0321 12:41:03.771479 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.771676 kubelet[2756]: E0321 12:41:03.771666 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.771676 kubelet[2756]: W0321 12:41:03.771675 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.771842 kubelet[2756]: E0321 12:41:03.771687 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.771918 kubelet[2756]: E0321 12:41:03.771900 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.771943 kubelet[2756]: W0321 12:41:03.771917 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.771943 kubelet[2756]: E0321 12:41:03.771937 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:03.772139 kubelet[2756]: E0321 12:41:03.772125 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.772139 kubelet[2756]: W0321 12:41:03.772137 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.772192 kubelet[2756]: E0321 12:41:03.772149 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:03.772444 kubelet[2756]: E0321 12:41:03.772427 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:03.772482 kubelet[2756]: W0321 12:41:03.772443 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:03.772482 kubelet[2756]: E0321 12:41:03.772464 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.624600 kubelet[2756]: E0321 12:41:04.624533 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:04.683998 kubelet[2756]: I0321 12:41:04.683956 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:41:04.775174 kubelet[2756]: E0321 12:41:04.775133 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.775174 kubelet[2756]: W0321 12:41:04.775160 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.775174 kubelet[2756]: E0321 12:41:04.775182 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.775859 kubelet[2756]: E0321 12:41:04.775484 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.775859 kubelet[2756]: W0321 12:41:04.775510 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.775859 kubelet[2756]: E0321 12:41:04.775539 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.775859 kubelet[2756]: E0321 12:41:04.775785 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.775859 kubelet[2756]: W0321 12:41:04.775797 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.775859 kubelet[2756]: E0321 12:41:04.775809 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.776074 kubelet[2756]: E0321 12:41:04.776052 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.776074 kubelet[2756]: W0321 12:41:04.776065 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.776074 kubelet[2756]: E0321 12:41:04.776074 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.776312 kubelet[2756]: E0321 12:41:04.776283 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.776312 kubelet[2756]: W0321 12:41:04.776297 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.776312 kubelet[2756]: E0321 12:41:04.776305 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.776620 kubelet[2756]: E0321 12:41:04.776589 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.776620 kubelet[2756]: W0321 12:41:04.776606 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.776620 kubelet[2756]: E0321 12:41:04.776619 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.776851 kubelet[2756]: E0321 12:41:04.776837 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.776851 kubelet[2756]: W0321 12:41:04.776846 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.776851 kubelet[2756]: E0321 12:41:04.776854 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.777046 kubelet[2756]: E0321 12:41:04.777032 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.777046 kubelet[2756]: W0321 12:41:04.777040 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.777118 kubelet[2756]: E0321 12:41:04.777050 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.777319 kubelet[2756]: E0321 12:41:04.777298 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.777319 kubelet[2756]: W0321 12:41:04.777315 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.777428 kubelet[2756]: E0321 12:41:04.777348 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.777612 kubelet[2756]: E0321 12:41:04.777582 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.777612 kubelet[2756]: W0321 12:41:04.777598 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.777612 kubelet[2756]: E0321 12:41:04.777608 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.777818 kubelet[2756]: E0321 12:41:04.777794 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.777818 kubelet[2756]: W0321 12:41:04.777810 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.777868 kubelet[2756]: E0321 12:41:04.777821 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.778093 kubelet[2756]: E0321 12:41:04.778069 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.778093 kubelet[2756]: W0321 12:41:04.778085 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.778149 kubelet[2756]: E0321 12:41:04.778096 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.778363 kubelet[2756]: E0321 12:41:04.778326 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.778363 kubelet[2756]: W0321 12:41:04.778357 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.778421 kubelet[2756]: E0321 12:41:04.778366 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.778589 kubelet[2756]: E0321 12:41:04.778554 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.778589 kubelet[2756]: W0321 12:41:04.778584 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.778643 kubelet[2756]: E0321 12:41:04.778596 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.778786 kubelet[2756]: E0321 12:41:04.778771 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.778786 kubelet[2756]: W0321 12:41:04.778782 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.778856 kubelet[2756]: E0321 12:41:04.778792 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.779091 kubelet[2756]: E0321 12:41:04.779073 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.779091 kubelet[2756]: W0321 12:41:04.779088 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.779147 kubelet[2756]: E0321 12:41:04.779099 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.779362 kubelet[2756]: E0321 12:41:04.779326 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.779362 kubelet[2756]: W0321 12:41:04.779360 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.779422 kubelet[2756]: E0321 12:41:04.779377 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.779704 kubelet[2756]: E0321 12:41:04.779688 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.779704 kubelet[2756]: W0321 12:41:04.779703 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.779762 kubelet[2756]: E0321 12:41:04.779723 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.779952 kubelet[2756]: E0321 12:41:04.779934 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.779991 kubelet[2756]: W0321 12:41:04.779952 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.779991 kubelet[2756]: E0321 12:41:04.779972 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.780224 kubelet[2756]: E0321 12:41:04.780208 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.780224 kubelet[2756]: W0321 12:41:04.780221 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.780305 kubelet[2756]: E0321 12:41:04.780246 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.780542 kubelet[2756]: E0321 12:41:04.780528 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.780542 kubelet[2756]: W0321 12:41:04.780540 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.780628 kubelet[2756]: E0321 12:41:04.780559 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.780809 kubelet[2756]: E0321 12:41:04.780795 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.780809 kubelet[2756]: W0321 12:41:04.780806 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.780863 kubelet[2756]: E0321 12:41:04.780839 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.781046 kubelet[2756]: E0321 12:41:04.781030 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.781046 kubelet[2756]: W0321 12:41:04.781042 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.781114 kubelet[2756]: E0321 12:41:04.781076 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.781295 kubelet[2756]: E0321 12:41:04.781267 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.781295 kubelet[2756]: W0321 12:41:04.781277 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.781295 kubelet[2756]: E0321 12:41:04.781292 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.781560 kubelet[2756]: E0321 12:41:04.781539 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.781560 kubelet[2756]: W0321 12:41:04.781555 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.781652 kubelet[2756]: E0321 12:41:04.781582 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.781872 kubelet[2756]: E0321 12:41:04.781852 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.781872 kubelet[2756]: W0321 12:41:04.781869 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.781942 kubelet[2756]: E0321 12:41:04.781886 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.782136 kubelet[2756]: E0321 12:41:04.782115 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.782136 kubelet[2756]: W0321 12:41:04.782130 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.782207 kubelet[2756]: E0321 12:41:04.782150 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.782474 kubelet[2756]: E0321 12:41:04.782454 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.782474 kubelet[2756]: W0321 12:41:04.782465 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.782540 kubelet[2756]: E0321 12:41:04.782478 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.782897 kubelet[2756]: E0321 12:41:04.782878 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.782897 kubelet[2756]: W0321 12:41:04.782894 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.782961 kubelet[2756]: E0321 12:41:04.782914 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.783169 kubelet[2756]: E0321 12:41:04.783145 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.783198 kubelet[2756]: W0321 12:41:04.783161 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.783198 kubelet[2756]: E0321 12:41:04.783192 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.783481 kubelet[2756]: E0321 12:41:04.783467 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.783481 kubelet[2756]: W0321 12:41:04.783479 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.783538 kubelet[2756]: E0321 12:41:04.783493 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:04.783741 kubelet[2756]: E0321 12:41:04.783729 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.783741 kubelet[2756]: W0321 12:41:04.783739 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.783795 kubelet[2756]: E0321 12:41:04.783749 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:41:04.784322 kubelet[2756]: E0321 12:41:04.784303 2756 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:41:04.784406 kubelet[2756]: W0321 12:41:04.784321 2756 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:41:04.784406 kubelet[2756]: E0321 12:41:04.784383 2756 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:41:05.563673 containerd[1520]: time="2025-03-21T12:41:05.563594422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:05.564962 containerd[1520]: time="2025-03-21T12:41:05.564883102Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 21 12:41:05.566393 containerd[1520]: time="2025-03-21T12:41:05.566364723Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:05.568856 containerd[1520]: time="2025-03-21T12:41:05.568787623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:05.569272 containerd[1520]: time="2025-03-21T12:41:05.569246620Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 2.517967414s" Mar 21 12:41:05.569318 containerd[1520]: time="2025-03-21T12:41:05.569277883Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 21 12:41:05.571844 containerd[1520]: time="2025-03-21T12:41:05.571809372Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 21 12:41:05.582105 containerd[1520]: time="2025-03-21T12:41:05.582029059Z" level=info msg="Container e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:05.596233 containerd[1520]: time="2025-03-21T12:41:05.596180062Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\"" Mar 21 12:41:05.596782 containerd[1520]: time="2025-03-21T12:41:05.596756786Z" level=info msg="StartContainer for \"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\"" Mar 21 12:41:05.598673 containerd[1520]: time="2025-03-21T12:41:05.598622592Z" level=info msg="connecting to shim e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c" address="unix:///run/containerd/s/e3ab5c1b2a466456842c25f624e5056e9bf8a507086573fe07e67bd8baf783ed" protocol=ttrpc version=3 Mar 21 12:41:05.619509 systemd[1]: Started cri-containerd-e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c.scope - libcontainer container 
e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c. Mar 21 12:41:05.663995 containerd[1520]: time="2025-03-21T12:41:05.663951932Z" level=info msg="StartContainer for \"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\" returns successfully" Mar 21 12:41:05.674305 systemd[1]: cri-containerd-e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c.scope: Deactivated successfully. Mar 21 12:41:05.676165 containerd[1520]: time="2025-03-21T12:41:05.676120814Z" level=info msg="received exit event container_id:\"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\" id:\"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\" pid:3406 exited_at:{seconds:1742560865 nanos:675788182}" Mar 21 12:41:05.676310 containerd[1520]: time="2025-03-21T12:41:05.676122036Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\" id:\"e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c\" pid:3406 exited_at:{seconds:1742560865 nanos:675788182}" Mar 21 12:41:05.700724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e2fed34a06cb424363b307b599e11f684302b2071d2f64df807e9aee0f32029c-rootfs.mount: Deactivated successfully. Mar 21 12:41:06.623720 kubelet[2756]: E0321 12:41:06.623657 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:06.692272 containerd[1520]: time="2025-03-21T12:41:06.692231060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 21 12:41:07.097988 systemd[1]: Started sshd@7-10.0.0.131:22-10.0.0.1:42092.service - OpenSSH per-connection server daemon (10.0.0.1:42092). 
Mar 21 12:41:07.151802 sshd[3447]: Accepted publickey for core from 10.0.0.1 port 42092 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:07.153438 sshd-session[3447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:07.158006 systemd-logind[1497]: New session 8 of user core. Mar 21 12:41:07.166514 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 21 12:41:07.283477 sshd[3449]: Connection closed by 10.0.0.1 port 42092 Mar 21 12:41:07.283782 sshd-session[3447]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:07.287603 systemd[1]: sshd@7-10.0.0.131:22-10.0.0.1:42092.service: Deactivated successfully. Mar 21 12:41:07.289536 systemd[1]: session-8.scope: Deactivated successfully. Mar 21 12:41:07.290162 systemd-logind[1497]: Session 8 logged out. Waiting for processes to exit. Mar 21 12:41:07.290920 systemd-logind[1497]: Removed session 8. Mar 21 12:41:08.623973 kubelet[2756]: E0321 12:41:08.623919 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:09.678576 containerd[1520]: time="2025-03-21T12:41:09.678516413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:09.679226 containerd[1520]: time="2025-03-21T12:41:09.679164569Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 21 12:41:09.680178 containerd[1520]: time="2025-03-21T12:41:09.680135558Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 
12:41:09.683011 containerd[1520]: time="2025-03-21T12:41:09.682970224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:09.683560 containerd[1520]: time="2025-03-21T12:41:09.683516475Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 2.991247909s" Mar 21 12:41:09.683560 containerd[1520]: time="2025-03-21T12:41:09.683543058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 21 12:41:09.685579 containerd[1520]: time="2025-03-21T12:41:09.685502143Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 21 12:41:09.693582 containerd[1520]: time="2025-03-21T12:41:09.693528443Z" level=info msg="Container 912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:09.706949 containerd[1520]: time="2025-03-21T12:41:09.706903171Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\"" Mar 21 12:41:09.707428 containerd[1520]: time="2025-03-21T12:41:09.707395084Z" level=info msg="StartContainer for \"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\"" Mar 21 12:41:09.708746 containerd[1520]: 
time="2025-03-21T12:41:09.708716504Z" level=info msg="connecting to shim 912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2" address="unix:///run/containerd/s/e3ab5c1b2a466456842c25f624e5056e9bf8a507086573fe07e67bd8baf783ed" protocol=ttrpc version=3 Mar 21 12:41:09.728483 systemd[1]: Started cri-containerd-912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2.scope - libcontainer container 912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2. Mar 21 12:41:09.875069 containerd[1520]: time="2025-03-21T12:41:09.875010024Z" level=info msg="StartContainer for \"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\" returns successfully" Mar 21 12:41:10.624579 kubelet[2756]: E0321 12:41:10.624517 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:11.002069 systemd[1]: cri-containerd-912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2.scope: Deactivated successfully. Mar 21 12:41:11.002724 systemd[1]: cri-containerd-912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2.scope: Consumed 584ms CPU time, 159M memory peak, 8K read from disk, 154M written to disk. 
Mar 21 12:41:11.003153 containerd[1520]: time="2025-03-21T12:41:11.003107582Z" level=info msg="received exit event container_id:\"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\" id:\"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\" pid:3485 exited_at:{seconds:1742560871 nanos:2800552}" Mar 21 12:41:11.003507 containerd[1520]: time="2025-03-21T12:41:11.003148845Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\" id:\"912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2\" pid:3485 exited_at:{seconds:1742560871 nanos:2800552}" Mar 21 12:41:11.028630 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-912f8df3ec58eef72b9af60ac024a8a1d0893592498b543609a71f631dd47cc2-rootfs.mount: Deactivated successfully. Mar 21 12:41:11.029535 kubelet[2756]: I0321 12:41:11.029256 2756 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 21 12:41:11.063934 kubelet[2756]: I0321 12:41:11.063735 2756 topology_manager.go:215] "Topology Admit Handler" podUID="6b46fde4-64a4-4f71-93be-1fefe5acc154" podNamespace="kube-system" podName="coredns-7db6d8ff4d-g77jd" Mar 21 12:41:11.067837 kubelet[2756]: I0321 12:41:11.067795 2756 topology_manager.go:215] "Topology Admit Handler" podUID="b7f90655-41a0-4696-86f4-6b686e37641e" podNamespace="calico-apiserver" podName="calico-apiserver-759b875587-qqd6m" Mar 21 12:41:11.067995 kubelet[2756]: I0321 12:41:11.067984 2756 topology_manager.go:215] "Topology Admit Handler" podUID="41b7c426-c55c-460c-be3a-e599d348f902" podNamespace="calico-apiserver" podName="calico-apiserver-759b875587-752vj" Mar 21 12:41:11.069851 kubelet[2756]: I0321 12:41:11.069820 2756 topology_manager.go:215] "Topology Admit Handler" podUID="0c69e1ad-ffb2-41ac-95f4-406788645a5d" podNamespace="calico-system" podName="calico-kube-controllers-78d6c95c47-bvbbf" Mar 21 12:41:11.070034 kubelet[2756]: I0321 12:41:11.069963 
2756 topology_manager.go:215] "Topology Admit Handler" podUID="7da0b772-0c89-4e27-8a75-aac98747e6ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tvb4m" Mar 21 12:41:11.081279 systemd[1]: Created slice kubepods-burstable-pod6b46fde4_64a4_4f71_93be_1fefe5acc154.slice - libcontainer container kubepods-burstable-pod6b46fde4_64a4_4f71_93be_1fefe5acc154.slice. Mar 21 12:41:11.087757 systemd[1]: Created slice kubepods-besteffort-pod41b7c426_c55c_460c_be3a_e599d348f902.slice - libcontainer container kubepods-besteffort-pod41b7c426_c55c_460c_be3a_e599d348f902.slice. Mar 21 12:41:11.093206 systemd[1]: Created slice kubepods-besteffort-podb7f90655_41a0_4696_86f4_6b686e37641e.slice - libcontainer container kubepods-besteffort-podb7f90655_41a0_4696_86f4_6b686e37641e.slice. Mar 21 12:41:11.098448 systemd[1]: Created slice kubepods-burstable-pod7da0b772_0c89_4e27_8a75_aac98747e6ec.slice - libcontainer container kubepods-burstable-pod7da0b772_0c89_4e27_8a75_aac98747e6ec.slice. Mar 21 12:41:11.103541 systemd[1]: Created slice kubepods-besteffort-pod0c69e1ad_ffb2_41ac_95f4_406788645a5d.slice - libcontainer container kubepods-besteffort-pod0c69e1ad_ffb2_41ac_95f4_406788645a5d.slice. 
Mar 21 12:41:11.224557 kubelet[2756]: I0321 12:41:11.224511 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p62f\" (UniqueName: \"kubernetes.io/projected/b7f90655-41a0-4696-86f4-6b686e37641e-kube-api-access-7p62f\") pod \"calico-apiserver-759b875587-qqd6m\" (UID: \"b7f90655-41a0-4696-86f4-6b686e37641e\") " pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" Mar 21 12:41:11.224557 kubelet[2756]: I0321 12:41:11.224560 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/41b7c426-c55c-460c-be3a-e599d348f902-calico-apiserver-certs\") pod \"calico-apiserver-759b875587-752vj\" (UID: \"41b7c426-c55c-460c-be3a-e599d348f902\") " pod="calico-apiserver/calico-apiserver-759b875587-752vj" Mar 21 12:41:11.224748 kubelet[2756]: I0321 12:41:11.224589 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9mwf2\" (UniqueName: \"kubernetes.io/projected/0c69e1ad-ffb2-41ac-95f4-406788645a5d-kube-api-access-9mwf2\") pod \"calico-kube-controllers-78d6c95c47-bvbbf\" (UID: \"0c69e1ad-ffb2-41ac-95f4-406788645a5d\") " pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" Mar 21 12:41:11.224748 kubelet[2756]: I0321 12:41:11.224613 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j74qr\" (UniqueName: \"kubernetes.io/projected/7da0b772-0c89-4e27-8a75-aac98747e6ec-kube-api-access-j74qr\") pod \"coredns-7db6d8ff4d-tvb4m\" (UID: \"7da0b772-0c89-4e27-8a75-aac98747e6ec\") " pod="kube-system/coredns-7db6d8ff4d-tvb4m" Mar 21 12:41:11.224748 kubelet[2756]: I0321 12:41:11.224642 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/0c69e1ad-ffb2-41ac-95f4-406788645a5d-tigera-ca-bundle\") pod \"calico-kube-controllers-78d6c95c47-bvbbf\" (UID: \"0c69e1ad-ffb2-41ac-95f4-406788645a5d\") " pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" Mar 21 12:41:11.224748 kubelet[2756]: I0321 12:41:11.224665 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb85v\" (UniqueName: \"kubernetes.io/projected/6b46fde4-64a4-4f71-93be-1fefe5acc154-kube-api-access-xb85v\") pod \"coredns-7db6d8ff4d-g77jd\" (UID: \"6b46fde4-64a4-4f71-93be-1fefe5acc154\") " pod="kube-system/coredns-7db6d8ff4d-g77jd" Mar 21 12:41:11.224748 kubelet[2756]: I0321 12:41:11.224689 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6b46fde4-64a4-4f71-93be-1fefe5acc154-config-volume\") pod \"coredns-7db6d8ff4d-g77jd\" (UID: \"6b46fde4-64a4-4f71-93be-1fefe5acc154\") " pod="kube-system/coredns-7db6d8ff4d-g77jd" Mar 21 12:41:11.224872 kubelet[2756]: I0321 12:41:11.224714 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/b7f90655-41a0-4696-86f4-6b686e37641e-calico-apiserver-certs\") pod \"calico-apiserver-759b875587-qqd6m\" (UID: \"b7f90655-41a0-4696-86f4-6b686e37641e\") " pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" Mar 21 12:41:11.224872 kubelet[2756]: I0321 12:41:11.224781 2756 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpwlk\" (UniqueName: \"kubernetes.io/projected/41b7c426-c55c-460c-be3a-e599d348f902-kube-api-access-zpwlk\") pod \"calico-apiserver-759b875587-752vj\" (UID: \"41b7c426-c55c-460c-be3a-e599d348f902\") " pod="calico-apiserver/calico-apiserver-759b875587-752vj" Mar 21 12:41:11.224872 kubelet[2756]: I0321 12:41:11.224829 2756 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7da0b772-0c89-4e27-8a75-aac98747e6ec-config-volume\") pod \"coredns-7db6d8ff4d-tvb4m\" (UID: \"7da0b772-0c89-4e27-8a75-aac98747e6ec\") " pod="kube-system/coredns-7db6d8ff4d-tvb4m" Mar 21 12:41:11.386733 containerd[1520]: time="2025-03-21T12:41:11.386524949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g77jd,Uid:6b46fde4-64a4-4f71-93be-1fefe5acc154,Namespace:kube-system,Attempt:0,}" Mar 21 12:41:11.391181 containerd[1520]: time="2025-03-21T12:41:11.391153090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-752vj,Uid:41b7c426-c55c-460c-be3a-e599d348f902,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:41:11.396982 containerd[1520]: time="2025-03-21T12:41:11.396889076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-qqd6m,Uid:b7f90655-41a0-4696-86f4-6b686e37641e,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:41:11.402167 containerd[1520]: time="2025-03-21T12:41:11.402058644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvb4m,Uid:7da0b772-0c89-4e27-8a75-aac98747e6ec,Namespace:kube-system,Attempt:0,}" Mar 21 12:41:11.410414 containerd[1520]: time="2025-03-21T12:41:11.410379725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d6c95c47-bvbbf,Uid:0c69e1ad-ffb2-41ac-95f4-406788645a5d,Namespace:calico-system,Attempt:0,}" Mar 21 12:41:11.479171 containerd[1520]: time="2025-03-21T12:41:11.478905432Z" level=error msg="Failed to destroy network for sandbox \"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.480352 
containerd[1520]: time="2025-03-21T12:41:11.480298885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-qqd6m,Uid:b7f90655-41a0-4696-86f4-6b686e37641e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.480646 kubelet[2756]: E0321 12:41:11.480596 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.480713 kubelet[2756]: E0321 12:41:11.480675 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" Mar 21 12:41:11.480713 kubelet[2756]: E0321 12:41:11.480695 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" Mar 21 12:41:11.480766 kubelet[2756]: E0321 12:41:11.480738 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-759b875587-qqd6m_calico-apiserver(b7f90655-41a0-4696-86f4-6b686e37641e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759b875587-qqd6m_calico-apiserver(b7f90655-41a0-4696-86f4-6b686e37641e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ee86610b45f0948573f685b67074e30b6c9f461ffcbcbe3f3c48a9b34c443ea9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" podUID="b7f90655-41a0-4696-86f4-6b686e37641e" Mar 21 12:41:11.486776 containerd[1520]: time="2025-03-21T12:41:11.486154570Z" level=error msg="Failed to destroy network for sandbox \"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.486776 containerd[1520]: time="2025-03-21T12:41:11.486598333Z" level=error msg="Failed to destroy network for sandbox \"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.488246 containerd[1520]: time="2025-03-21T12:41:11.488216683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g77jd,Uid:6b46fde4-64a4-4f71-93be-1fefe5acc154,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.489546 containerd[1520]: time="2025-03-21T12:41:11.489358917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-752vj,Uid:41b7c426-c55c-460c-be3a-e599d348f902,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.489754 kubelet[2756]: E0321 12:41:11.489618 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.489754 kubelet[2756]: E0321 12:41:11.489673 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759b875587-752vj" Mar 21 12:41:11.489754 kubelet[2756]: E0321 12:41:11.489692 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-759b875587-752vj" Mar 21 12:41:11.489858 kubelet[2756]: E0321 12:41:11.489730 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-759b875587-752vj_calico-apiserver(41b7c426-c55c-460c-be3a-e599d348f902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-759b875587-752vj_calico-apiserver(41b7c426-c55c-460c-be3a-e599d348f902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12c2c8dbf045b708d1a09cd62908f36bf5d0ff4bfa27a6a8de24406e289392b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-759b875587-752vj" podUID="41b7c426-c55c-460c-be3a-e599d348f902" Mar 21 12:41:11.490422 kubelet[2756]: E0321 12:41:11.489939 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.490422 kubelet[2756]: E0321 12:41:11.489968 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g77jd" Mar 21 12:41:11.490422 kubelet[2756]: E0321 12:41:11.489985 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-g77jd" Mar 21 12:41:11.490517 kubelet[2756]: E0321 12:41:11.490026 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-g77jd_kube-system(6b46fde4-64a4-4f71-93be-1fefe5acc154)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-g77jd_kube-system(6b46fde4-64a4-4f71-93be-1fefe5acc154)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a7264f0574cdb06f89410e14fcba7dc4649a4b4053ce30ccddb5ad74476efd2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-g77jd" podUID="6b46fde4-64a4-4f71-93be-1fefe5acc154" Mar 21 12:41:11.491579 containerd[1520]: time="2025-03-21T12:41:11.491534577Z" level=error msg="Failed to destroy network for sandbox \"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.492132 containerd[1520]: time="2025-03-21T12:41:11.492109341Z" level=error msg="Failed to destroy network for sandbox \"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.492880 containerd[1520]: time="2025-03-21T12:41:11.492778603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d6c95c47-bvbbf,Uid:0c69e1ad-ffb2-41ac-95f4-406788645a5d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.492992 kubelet[2756]: E0321 12:41:11.492929 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.492992 kubelet[2756]: E0321 12:41:11.492957 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" Mar 21 12:41:11.492992 kubelet[2756]: E0321 12:41:11.492974 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" Mar 21 12:41:11.493076 kubelet[2756]: E0321 12:41:11.493008 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78d6c95c47-bvbbf_calico-system(0c69e1ad-ffb2-41ac-95f4-406788645a5d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78d6c95c47-bvbbf_calico-system(0c69e1ad-ffb2-41ac-95f4-406788645a5d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"55aa71933aa97531830ffb002d64d1f380420a981f58dcb017e890b24b701fa3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" podUID="0c69e1ad-ffb2-41ac-95f4-406788645a5d" Mar 21 12:41:11.494077 containerd[1520]: time="2025-03-21T12:41:11.494022719Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvb4m,Uid:7da0b772-0c89-4e27-8a75-aac98747e6ec,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.494203 kubelet[2756]: E0321 12:41:11.494171 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:11.494263 kubelet[2756]: E0321 12:41:11.494201 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tvb4m" Mar 21 12:41:11.494263 kubelet[2756]: E0321 12:41:11.494215 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tvb4m" Mar 21 12:41:11.494399 kubelet[2756]: E0321 12:41:11.494243 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tvb4m_kube-system(7da0b772-0c89-4e27-8a75-aac98747e6ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tvb4m_kube-system(7da0b772-0c89-4e27-8a75-aac98747e6ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3bd783c482a58132e0dd591e5e1d41f71010155fde629bfea5b127015fc3e44e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tvb4m" 
podUID="7da0b772-0c89-4e27-8a75-aac98747e6ec" Mar 21 12:41:11.709269 containerd[1520]: time="2025-03-21T12:41:11.708990722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 21 12:41:12.300523 systemd[1]: Started sshd@8-10.0.0.131:22-10.0.0.1:42102.service - OpenSSH per-connection server daemon (10.0.0.1:42102). Mar 21 12:41:12.353911 sshd[3706]: Accepted publickey for core from 10.0.0.1 port 42102 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:12.355398 sshd-session[3706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:12.359551 systemd-logind[1497]: New session 9 of user core. Mar 21 12:41:12.368481 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 21 12:41:12.473060 sshd[3708]: Connection closed by 10.0.0.1 port 42102 Mar 21 12:41:12.473440 sshd-session[3706]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:12.477204 systemd[1]: sshd@8-10.0.0.131:22-10.0.0.1:42102.service: Deactivated successfully. Mar 21 12:41:12.479305 systemd[1]: session-9.scope: Deactivated successfully. Mar 21 12:41:12.479996 systemd-logind[1497]: Session 9 logged out. Waiting for processes to exit. Mar 21 12:41:12.480872 systemd-logind[1497]: Removed session 9. Mar 21 12:41:12.629880 systemd[1]: Created slice kubepods-besteffort-pod0e815f45_8221_492c_b826_aef1cf581aeb.slice - libcontainer container kubepods-besteffort-pod0e815f45_8221_492c_b826_aef1cf581aeb.slice. 
Mar 21 12:41:12.631851 containerd[1520]: time="2025-03-21T12:41:12.631819740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4l5h,Uid:0e815f45-8221-492c-b826-aef1cf581aeb,Namespace:calico-system,Attempt:0,}" Mar 21 12:41:12.680208 containerd[1520]: time="2025-03-21T12:41:12.680157521Z" level=error msg="Failed to destroy network for sandbox \"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:12.681377 containerd[1520]: time="2025-03-21T12:41:12.681309599Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4l5h,Uid:0e815f45-8221-492c-b826-aef1cf581aeb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:12.681545 kubelet[2756]: E0321 12:41:12.681505 2756 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:41:12.681856 kubelet[2756]: E0321 12:41:12.681554 2756 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4l5h" Mar 21 12:41:12.681856 kubelet[2756]: E0321 12:41:12.681572 2756 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-w4l5h" Mar 21 12:41:12.681856 kubelet[2756]: E0321 12:41:12.681610 2756 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-w4l5h_calico-system(0e815f45-8221-492c-b826-aef1cf581aeb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-w4l5h_calico-system(0e815f45-8221-492c-b826-aef1cf581aeb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba7fe0ac119a6720a4bcaac851903db204146de12258d3eba7405e11de01b8e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-w4l5h" podUID="0e815f45-8221-492c-b826-aef1cf581aeb" Mar 21 12:41:12.682348 systemd[1]: run-netns-cni\x2d6a815db4\x2da1f2\x2dfaa0\x2d9bdd\x2d472f5911cf2b.mount: Deactivated successfully. Mar 21 12:41:15.514056 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424597287.mount: Deactivated successfully. 
Mar 21 12:41:16.074084 containerd[1520]: time="2025-03-21T12:41:16.074036385Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:16.074877 containerd[1520]: time="2025-03-21T12:41:16.074808058Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 21 12:41:16.076016 containerd[1520]: time="2025-03-21T12:41:16.075992685Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:16.077766 containerd[1520]: time="2025-03-21T12:41:16.077727429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:41:16.078164 containerd[1520]: time="2025-03-21T12:41:16.078119924Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 4.369091617s" Mar 21 12:41:16.078206 containerd[1520]: time="2025-03-21T12:41:16.078164402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 21 12:41:16.086894 containerd[1520]: time="2025-03-21T12:41:16.086863612Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 21 12:41:16.097885 containerd[1520]: time="2025-03-21T12:41:16.097850426Z" level=info msg="Container 
c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:16.108844 containerd[1520]: time="2025-03-21T12:41:16.108790780Z" level=info msg="CreateContainer within sandbox \"c37e8d8f5ceb4a57577af0c57167989da8ff02a7b9a5b13ac40ca2640fe50501\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\"" Mar 21 12:41:16.109605 containerd[1520]: time="2025-03-21T12:41:16.109567854Z" level=info msg="StartContainer for \"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\"" Mar 21 12:41:16.111058 containerd[1520]: time="2025-03-21T12:41:16.111033467Z" level=info msg="connecting to shim c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d" address="unix:///run/containerd/s/e3ab5c1b2a466456842c25f624e5056e9bf8a507086573fe07e67bd8baf783ed" protocol=ttrpc version=3 Mar 21 12:41:16.130665 systemd[1]: Started cri-containerd-c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d.scope - libcontainer container c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d. Mar 21 12:41:16.177438 containerd[1520]: time="2025-03-21T12:41:16.177398513Z" level=info msg="StartContainer for \"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\" returns successfully" Mar 21 12:41:16.235640 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 21 12:41:16.235755 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 21 12:41:16.731998 kubelet[2756]: I0321 12:41:16.731927 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mzcn6" podStartSLOduration=1.416668797 podStartE2EDuration="16.731912808s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:00.763624862 +0000 UTC m=+21.229664856" lastFinishedPulling="2025-03-21 12:41:16.078868873 +0000 UTC m=+36.544908867" observedRunningTime="2025-03-21 12:41:16.731713976 +0000 UTC m=+37.197753970" watchObservedRunningTime="2025-03-21 12:41:16.731912808 +0000 UTC m=+37.197952802" Mar 21 12:41:17.488903 systemd[1]: Started sshd@9-10.0.0.131:22-10.0.0.1:40858.service - OpenSSH per-connection server daemon (10.0.0.1:40858). Mar 21 12:41:17.565429 sshd[3869]: Accepted publickey for core from 10.0.0.1 port 40858 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:17.566994 sshd-session[3869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:17.576878 systemd-logind[1497]: New session 10 of user core. Mar 21 12:41:17.583957 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 21 12:41:17.721000 kubelet[2756]: I0321 12:41:17.720968 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:41:17.832948 sshd[3926]: Connection closed by 10.0.0.1 port 40858 Mar 21 12:41:17.833317 sshd-session[3869]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:17.838263 systemd[1]: sshd@9-10.0.0.131:22-10.0.0.1:40858.service: Deactivated successfully. Mar 21 12:41:17.841117 systemd[1]: session-10.scope: Deactivated successfully. Mar 21 12:41:17.841975 systemd-logind[1497]: Session 10 logged out. Waiting for processes to exit. Mar 21 12:41:17.843026 systemd-logind[1497]: Removed session 10. 
Mar 21 12:41:21.785160 kubelet[2756]: I0321 12:41:21.785109 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:41:22.850580 systemd[1]: Started sshd@10-10.0.0.131:22-10.0.0.1:40864.service - OpenSSH per-connection server daemon (10.0.0.1:40864). Mar 21 12:41:22.918523 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 40864 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:22.920569 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:22.926383 systemd-logind[1497]: New session 11 of user core. Mar 21 12:41:22.932075 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 21 12:41:22.995373 kernel: bpftool[4115]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 21 12:41:22.998854 kubelet[2756]: I0321 12:41:22.998758 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:41:23.068427 sshd[4082]: Connection closed by 10.0.0.1 port 40864 Mar 21 12:41:23.069012 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:23.080724 systemd[1]: sshd@10-10.0.0.131:22-10.0.0.1:40864.service: Deactivated successfully. Mar 21 12:41:23.084464 systemd[1]: session-11.scope: Deactivated successfully. Mar 21 12:41:23.085310 systemd-logind[1497]: Session 11 logged out. Waiting for processes to exit. Mar 21 12:41:23.089675 systemd[1]: Started sshd@11-10.0.0.131:22-10.0.0.1:40876.service - OpenSSH per-connection server daemon (10.0.0.1:40876). Mar 21 12:41:23.092232 systemd-logind[1497]: Removed session 11. Mar 21 12:41:23.139514 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 40876 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:23.140930 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:23.146423 systemd-logind[1497]: New session 12 of user core. 
Mar 21 12:41:23.152560 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 21 12:41:23.225959 containerd[1520]: time="2025-03-21T12:41:23.225896585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\" id:\"6ab676a3fffb15dba7c1cac2b32a3ede8f0ecb8e88a1c2c06e3312f8ca5ca8c2\" pid:4141 exit_status:1 exited_at:{seconds:1742560883 nanos:225163098}" Mar 21 12:41:23.249669 systemd-networkd[1436]: vxlan.calico: Link UP Mar 21 12:41:23.249681 systemd-networkd[1436]: vxlan.calico: Gained carrier Mar 21 12:41:23.337982 containerd[1520]: time="2025-03-21T12:41:23.337945478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\" id:\"9f5c6bb524cd8f1e97a2e0d176dc7b954463219f54b0d867ea9f611a06e2f209\" pid:4208 exit_status:1 exited_at:{seconds:1742560883 nanos:337022391}" Mar 21 12:41:23.348596 sshd[4129]: Connection closed by 10.0.0.1 port 40876 Mar 21 12:41:23.351428 sshd-session[4125]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:23.362719 systemd[1]: sshd@11-10.0.0.131:22-10.0.0.1:40876.service: Deactivated successfully. Mar 21 12:41:23.367889 systemd[1]: session-12.scope: Deactivated successfully. Mar 21 12:41:23.368979 systemd-logind[1497]: Session 12 logged out. Waiting for processes to exit. Mar 21 12:41:23.374538 systemd[1]: Started sshd@12-10.0.0.131:22-10.0.0.1:40884.service - OpenSSH per-connection server daemon (10.0.0.1:40884). Mar 21 12:41:23.375487 systemd-logind[1497]: Removed session 12. Mar 21 12:41:23.428764 sshd[4228]: Accepted publickey for core from 10.0.0.1 port 40884 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:41:23.430443 sshd-session[4228]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:41:23.435376 systemd-logind[1497]: New session 13 of user core. 
Mar 21 12:41:23.440464 systemd[1]: Started session-13.scope - Session 13 of User core. Mar 21 12:41:23.565601 sshd[4231]: Connection closed by 10.0.0.1 port 40884 Mar 21 12:41:23.565935 sshd-session[4228]: pam_unix(sshd:session): session closed for user core Mar 21 12:41:23.570274 systemd[1]: sshd@12-10.0.0.131:22-10.0.0.1:40884.service: Deactivated successfully. Mar 21 12:41:23.572158 systemd[1]: session-13.scope: Deactivated successfully. Mar 21 12:41:23.572831 systemd-logind[1497]: Session 13 logged out. Waiting for processes to exit. Mar 21 12:41:23.573661 systemd-logind[1497]: Removed session 13. Mar 21 12:41:23.624402 containerd[1520]: time="2025-03-21T12:41:23.624357466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvb4m,Uid:7da0b772-0c89-4e27-8a75-aac98747e6ec,Namespace:kube-system,Attempt:0,}" Mar 21 12:41:23.625126 containerd[1520]: time="2025-03-21T12:41:23.624526758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-752vj,Uid:41b7c426-c55c-460c-be3a-e599d348f902,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:41:23.625126 containerd[1520]: time="2025-03-21T12:41:23.624696370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-qqd6m,Uid:b7f90655-41a0-4696-86f4-6b686e37641e,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:41:23.625126 containerd[1520]: time="2025-03-21T12:41:23.624991709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g77jd,Uid:6b46fde4-64a4-4f71-93be-1fefe5acc154,Namespace:kube-system,Attempt:0,}" Mar 21 12:41:23.625126 containerd[1520]: time="2025-03-21T12:41:23.625022428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d6c95c47-bvbbf,Uid:0c69e1ad-ffb2-41ac-95f4-406788645a5d,Namespace:calico-system,Attempt:0,}" Mar 21 12:41:24.101275 systemd-networkd[1436]: cali9235b1c8605: Link UP Mar 21 12:41:24.102293 systemd-networkd[1436]: cali9235b1c8605: Gained 
carrier Mar 21 12:41:24.122358 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4307] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0 calico-apiserver-759b875587- calico-apiserver b7f90655-41a0-4696-86f4-6b686e37641e 720 0 2025-03-21 12:41:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759b875587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-759b875587-qqd6m eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9235b1c8605 [] []}} ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-" Mar 21 12:41:24.122358 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4307] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.122358 containerd[1520]: 2025-03-21 12:41:24.061 [INFO][4361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" HandleID="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Workload="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.071 [INFO][4361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" 
HandleID="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Workload="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003654c0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-759b875587-qqd6m", "timestamp":"2025-03-21 12:41:24.061909535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.071 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.074 [INFO][4361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" host="localhost" Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.079 [INFO][4361] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.082 [INFO][4361] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.083 [INFO][4361] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.122579 containerd[1520]: 2025-03-21 12:41:24.085 [INFO][4361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.122579 containerd[1520]: 
2025-03-21 12:41:24.085 [INFO][4361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" host="localhost" Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.086 [INFO][4361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.090 [INFO][4361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" host="localhost" Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.095 [INFO][4361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" host="localhost" Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.095 [INFO][4361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" host="localhost" Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.095 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 21 12:41:24.122865 containerd[1520]: 2025-03-21 12:41:24.095 [INFO][4361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" HandleID="k8s-pod-network.ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Workload="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.122994 containerd[1520]: 2025-03-21 12:41:24.098 [INFO][4307] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0", GenerateName:"calico-apiserver-759b875587-", Namespace:"calico-apiserver", SelfLink:"", UID:"b7f90655-41a0-4696-86f4-6b686e37641e", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759b875587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-759b875587-qqd6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9235b1c8605", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.123052 containerd[1520]: 2025-03-21 12:41:24.099 [INFO][4307] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.123052 containerd[1520]: 2025-03-21 12:41:24.099 [INFO][4307] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9235b1c8605 ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.123052 containerd[1520]: 2025-03-21 12:41:24.103 [INFO][4307] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.123134 containerd[1520]: 2025-03-21 12:41:24.103 [INFO][4307] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0", GenerateName:"calico-apiserver-759b875587-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"b7f90655-41a0-4696-86f4-6b686e37641e", ResourceVersion:"720", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759b875587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e", Pod:"calico-apiserver-759b875587-qqd6m", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9235b1c8605", MAC:"f2:a7:8f:75:02:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.123186 containerd[1520]: 2025-03-21 12:41:24.118 [INFO][4307] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-qqd6m" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--qqd6m-eth0" Mar 21 12:41:24.138211 systemd-networkd[1436]: cali6058e183544: Link UP Mar 21 12:41:24.138427 systemd-networkd[1436]: cali6058e183544: Gained carrier Mar 21 12:41:24.151742 containerd[1520]: 2025-03-21 12:41:23.983 [INFO][4281] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0 
coredns-7db6d8ff4d- kube-system 7da0b772-0c89-4e27-8a75-aac98747e6ec 717 0 2025-03-21 12:40:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-tvb4m eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6058e183544 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-" Mar 21 12:41:24.151742 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4281] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.151742 containerd[1520]: 2025-03-21 12:41:24.061 [INFO][4358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" HandleID="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Workload="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" HandleID="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Workload="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003874b0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-tvb4m", "timestamp":"2025-03-21 12:41:24.061904856 +0000 UTC"}, Hostname:"localhost", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.096 [INFO][4358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.096 [INFO][4358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.097 [INFO][4358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" host="localhost" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.101 [INFO][4358] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.105 [INFO][4358] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.107 [INFO][4358] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.108 [INFO][4358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.154370 containerd[1520]: 2025-03-21 12:41:24.109 [INFO][4358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" host="localhost" Mar 21 12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.118 [INFO][4358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac Mar 21 
12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.123 [INFO][4358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" host="localhost" Mar 21 12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.130 [INFO][4358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" host="localhost" Mar 21 12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.130 [INFO][4358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" host="localhost" Mar 21 12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.130 [INFO][4358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:41:24.154613 containerd[1520]: 2025-03-21 12:41:24.130 [INFO][4358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" HandleID="k8s-pod-network.2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Workload="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.154730 containerd[1520]: 2025-03-21 12:41:24.135 [INFO][4281] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7da0b772-0c89-4e27-8a75-aac98747e6ec", ResourceVersion:"717", Generation:0, 
CreationTimestamp:time.Date(2025, time.March, 21, 12, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-tvb4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6058e183544", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.154799 containerd[1520]: 2025-03-21 12:41:24.135 [INFO][4281] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.154799 containerd[1520]: 2025-03-21 12:41:24.135 [INFO][4281] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6058e183544 ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.154799 containerd[1520]: 2025-03-21 12:41:24.137 [INFO][4281] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.154871 containerd[1520]: 2025-03-21 12:41:24.139 [INFO][4281] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7da0b772-0c89-4e27-8a75-aac98747e6ec", ResourceVersion:"717", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac", Pod:"coredns-7db6d8ff4d-tvb4m", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6058e183544", MAC:"36:35:29:41:5e:d0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.154871 containerd[1520]: 2025-03-21 12:41:24.148 [INFO][4281] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tvb4m" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--tvb4m-eth0" Mar 21 12:41:24.177393 systemd-networkd[1436]: caliab644a02157: Link UP Mar 21 12:41:24.178205 systemd-networkd[1436]: caliab644a02157: Gained carrier Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4289] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--759b875587--752vj-eth0 calico-apiserver-759b875587- calico-apiserver 41b7c426-c55c-460c-be3a-e599d348f902 719 0 2025-03-21 12:41:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:759b875587 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-759b875587-752vj eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliab644a02157 [] []}} ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" 
Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4289] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.062 [INFO][4365] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" HandleID="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Workload="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4365] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" HandleID="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Workload="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ed0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-759b875587-752vj", "timestamp":"2025-03-21 12:41:24.062315329 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4365] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.131 [INFO][4365] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.131 [INFO][4365] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.133 [INFO][4365] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.139 [INFO][4365] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.147 [INFO][4365] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.150 [INFO][4365] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.152 [INFO][4365] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.152 [INFO][4365] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.153 [INFO][4365] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07 Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.161 [INFO][4365] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.166 [INFO][4365] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 
handle="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.167 [INFO][4365] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" host="localhost" Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.167 [INFO][4365] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:41:24.194148 containerd[1520]: 2025-03-21 12:41:24.167 [INFO][4365] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" HandleID="k8s-pod-network.555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Workload="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.171 [INFO][4289] cni-plugin/k8s.go 386: Populated endpoint ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--759b875587--752vj-eth0", GenerateName:"calico-apiserver-759b875587-", Namespace:"calico-apiserver", SelfLink:"", UID:"41b7c426-c55c-460c-be3a-e599d348f902", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759b875587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-759b875587-752vj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab644a02157", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.171 [INFO][4289] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.171 [INFO][4289] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliab644a02157 ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.178 [INFO][4289] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.182 [INFO][4289] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--759b875587--752vj-eth0", GenerateName:"calico-apiserver-759b875587-", Namespace:"calico-apiserver", SelfLink:"", UID:"41b7c426-c55c-460c-be3a-e599d348f902", ResourceVersion:"719", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"759b875587", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07", Pod:"calico-apiserver-759b875587-752vj", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliab644a02157", MAC:"8e:84:b5:52:5a:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.194902 containerd[1520]: 2025-03-21 12:41:24.190 [INFO][4289] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" 
Namespace="calico-apiserver" Pod="calico-apiserver-759b875587-752vj" WorkloadEndpoint="localhost-k8s-calico--apiserver--759b875587--752vj-eth0" Mar 21 12:41:24.215569 systemd-networkd[1436]: cali04cc2026174: Link UP Mar 21 12:41:24.215938 systemd-networkd[1436]: cali04cc2026174: Gained carrier Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:23.983 [INFO][4320] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0 coredns-7db6d8ff4d- kube-system 6b46fde4-64a4-4f71-93be-1fefe5acc154 714 0 2025-03-21 12:40:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-g77jd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali04cc2026174 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4320] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.063 [INFO][4359] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" HandleID="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Workload="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4359] ipam/ipam_plugin.go 
265: Auto assigning IP ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" HandleID="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Workload="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000537a20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-g77jd", "timestamp":"2025-03-21 12:41:24.063213927 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.072 [INFO][4359] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.167 [INFO][4359] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.167 [INFO][4359] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.169 [INFO][4359] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.175 [INFO][4359] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.185 [INFO][4359] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.187 [INFO][4359] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.191 [INFO][4359] ipam/ipam.go 232: Affinity is confirmed and block has been loaded 
cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.191 [INFO][4359] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.195 [INFO][4359] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7 Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.200 [INFO][4359] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.207 [INFO][4359] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.208 [INFO][4359] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" host="localhost" Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.208 [INFO][4359] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 21 12:41:24.236881 containerd[1520]: 2025-03-21 12:41:24.208 [INFO][4359] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" HandleID="k8s-pod-network.e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Workload="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.212 [INFO][4320] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b46fde4-64a4-4f71-93be-1fefe5acc154", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-g77jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04cc2026174", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.213 [INFO][4320] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.213 [INFO][4320] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali04cc2026174 ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.216 [INFO][4320] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.217 [INFO][4320] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"6b46fde4-64a4-4f71-93be-1fefe5acc154", ResourceVersion:"714", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 40, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7", Pod:"coredns-7db6d8ff4d-g77jd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali04cc2026174", MAC:"76:1b:18:4e:ad:41", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.237727 containerd[1520]: 2025-03-21 12:41:24.226 [INFO][4320] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" Namespace="kube-system" 
Pod="coredns-7db6d8ff4d-g77jd" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--g77jd-eth0" Mar 21 12:41:24.264268 systemd-networkd[1436]: cali4c05e83ac84: Link UP Mar 21 12:41:24.265148 systemd-networkd[1436]: cali4c05e83ac84: Gained carrier Mar 21 12:41:24.276606 containerd[1520]: time="2025-03-21T12:41:24.276490598Z" level=info msg="connecting to shim 555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07" address="unix:///run/containerd/s/f76640d2b438b85c25d8793a7191448d1cd48bbf872c3f1c82a0858ab3e65de7" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.277104 containerd[1520]: time="2025-03-21T12:41:24.277063169Z" level=info msg="connecting to shim 2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac" address="unix:///run/containerd/s/d97192ed47616180052d0776a8558ea4941209a7cb579025e28099fef0a3dba1" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.278945 containerd[1520]: time="2025-03-21T12:41:24.278525439Z" level=info msg="connecting to shim ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e" address="unix:///run/containerd/s/13fd4f9cd8a7b181207a915a6f1c5246a778da224cd5930673d4c61283d26b45" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4332] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0 calico-kube-controllers-78d6c95c47- calico-system 0c69e1ad-ffb2-41ac-95f4-406788645a5d 721 0 2025-03-21 12:41:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:78d6c95c47 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-78d6c95c47-bvbbf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali4c05e83ac84 [] []}} ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:23.984 [INFO][4332] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.062 [INFO][4363] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" HandleID="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Workload="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.076 [INFO][4363] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" HandleID="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Workload="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002000b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-78d6c95c47-bvbbf", "timestamp":"2025-03-21 12:41:24.062019881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.076 [INFO][4363] ipam/ipam_plugin.go 353: About to acquire host-wide 
IPAM lock. Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.208 [INFO][4363] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.208 [INFO][4363] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.209 [INFO][4363] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.214 [INFO][4363] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.222 [INFO][4363] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.236 [INFO][4363] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.244 [INFO][4363] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.244 [INFO][4363] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.246 [INFO][4363] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292 Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.252 [INFO][4363] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.258 [INFO][4363] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.258 [INFO][4363] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" host="localhost" Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.258 [INFO][4363] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:41:24.304034 containerd[1520]: 2025-03-21 12:41:24.259 [INFO][4363] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" HandleID="k8s-pod-network.df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Workload="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.304648 containerd[1520]: 2025-03-21 12:41:24.261 [INFO][4332] cni-plugin/k8s.go 386: Populated endpoint ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0", GenerateName:"calico-kube-controllers-78d6c95c47-", Namespace:"calico-system", SelfLink:"", UID:"0c69e1ad-ffb2-41ac-95f4-406788645a5d", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"78d6c95c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-78d6c95c47-bvbbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c05e83ac84", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.304648 containerd[1520]: 2025-03-21 12:41:24.262 [INFO][4332] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.304648 containerd[1520]: 2025-03-21 12:41:24.262 [INFO][4332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c05e83ac84 ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.304648 containerd[1520]: 2025-03-21 12:41:24.265 [INFO][4332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 
21 12:41:24.304648 containerd[1520]: 2025-03-21 12:41:24.266 [INFO][4332] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0", GenerateName:"calico-kube-controllers-78d6c95c47-", Namespace:"calico-system", SelfLink:"", UID:"0c69e1ad-ffb2-41ac-95f4-406788645a5d", ResourceVersion:"721", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78d6c95c47", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292", Pod:"calico-kube-controllers-78d6c95c47-bvbbf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali4c05e83ac84", MAC:"6e:1d:a0:81:ba:16", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.304648 containerd[1520]: 
2025-03-21 12:41:24.294 [INFO][4332] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" Namespace="calico-system" Pod="calico-kube-controllers-78d6c95c47-bvbbf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--78d6c95c47--bvbbf-eth0" Mar 21 12:41:24.305031 containerd[1520]: time="2025-03-21T12:41:24.304181731Z" level=info msg="connecting to shim e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7" address="unix:///run/containerd/s/7557af620d06508894d5cf23e2ee1fe12c099467773e5452f2f2f5458658d13e" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.308524 systemd[1]: Started cri-containerd-ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e.scope - libcontainer container ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e. Mar 21 12:41:24.314008 systemd[1]: Started cri-containerd-2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac.scope - libcontainer container 2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac. Mar 21 12:41:24.315938 systemd[1]: Started cri-containerd-555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07.scope - libcontainer container 555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07. Mar 21 12:41:24.333719 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:41:24.337551 systemd[1]: Started cri-containerd-e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7.scope - libcontainer container e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7. 
Mar 21 12:41:24.337802 containerd[1520]: time="2025-03-21T12:41:24.337115388Z" level=info msg="connecting to shim df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292" address="unix:///run/containerd/s/9e3a74a376c1ac63f002d0ac23b41516aadd96ed45e8d7a8ab391b72ef70b94b" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.344305 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:41:24.346590 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:41:24.359163 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:41:24.375866 systemd[1]: Started cri-containerd-df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292.scope - libcontainer container df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292. Mar 21 12:41:24.396976 containerd[1520]: time="2025-03-21T12:41:24.396866248Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tvb4m,Uid:7da0b772-0c89-4e27-8a75-aac98747e6ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac\"" Mar 21 12:41:24.397692 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:41:24.407342 containerd[1520]: time="2025-03-21T12:41:24.405785748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-qqd6m,Uid:b7f90655-41a0-4696-86f4-6b686e37641e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e\"" Mar 21 12:41:24.407342 containerd[1520]: time="2025-03-21T12:41:24.406599159Z" level=info msg="CreateContainer within sandbox \"2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac\" for container 
&ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:41:24.410309 containerd[1520]: time="2025-03-21T12:41:24.410007347Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 21 12:41:24.410805 containerd[1520]: time="2025-03-21T12:41:24.410754508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-759b875587-752vj,Uid:41b7c426-c55c-460c-be3a-e599d348f902,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07\"" Mar 21 12:41:24.428350 containerd[1520]: time="2025-03-21T12:41:24.427116807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-g77jd,Uid:6b46fde4-64a4-4f71-93be-1fefe5acc154,Namespace:kube-system,Attempt:0,} returns sandbox id \"e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7\"" Mar 21 12:41:24.432551 containerd[1520]: time="2025-03-21T12:41:24.432525028Z" level=info msg="CreateContainer within sandbox \"e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:41:24.443032 containerd[1520]: time="2025-03-21T12:41:24.442983397Z" level=info msg="Container 79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:24.443099 containerd[1520]: time="2025-03-21T12:41:24.443064165Z" level=info msg="Container 545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:41:24.443503 containerd[1520]: time="2025-03-21T12:41:24.443427176Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d6c95c47-bvbbf,Uid:0c69e1ad-ffb2-41ac-95f4-406788645a5d,Namespace:calico-system,Attempt:0,} returns sandbox id \"df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292\"" Mar 21 12:41:24.451392 containerd[1520]: time="2025-03-21T12:41:24.451360445Z" level=info 
msg="CreateContainer within sandbox \"e930dab626c6c6dc9d6f22a0aeb4fa8c8a075454259179694f4a92613227b9b7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174\"" Mar 21 12:41:24.451727 containerd[1520]: time="2025-03-21T12:41:24.451676794Z" level=info msg="StartContainer for \"79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174\"" Mar 21 12:41:24.452439 containerd[1520]: time="2025-03-21T12:41:24.452414568Z" level=info msg="connecting to shim 79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174" address="unix:///run/containerd/s/7557af620d06508894d5cf23e2ee1fe12c099467773e5452f2f2f5458658d13e" protocol=ttrpc version=3 Mar 21 12:41:24.453409 containerd[1520]: time="2025-03-21T12:41:24.453375558Z" level=info msg="CreateContainer within sandbox \"2d4bd836647fe8f2612da5cacbba684942e3de09f680bf20672e92b7c99cd3ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d\"" Mar 21 12:41:24.453785 containerd[1520]: time="2025-03-21T12:41:24.453758968Z" level=info msg="StartContainer for \"545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d\"" Mar 21 12:41:24.455105 containerd[1520]: time="2025-03-21T12:41:24.455065313Z" level=info msg="connecting to shim 545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d" address="unix:///run/containerd/s/d97192ed47616180052d0776a8558ea4941209a7cb579025e28099fef0a3dba1" protocol=ttrpc version=3 Mar 21 12:41:24.471481 systemd[1]: Started cri-containerd-79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174.scope - libcontainer container 79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174. Mar 21 12:41:24.474388 systemd[1]: Started cri-containerd-545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d.scope - libcontainer container 545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d. 
Mar 21 12:41:24.505279 containerd[1520]: time="2025-03-21T12:41:24.505239308Z" level=info msg="StartContainer for \"79eded1809b11dd4260c47d830ff5713a3aac4e7568ce55995ae1cd01eaf5174\" returns successfully" Mar 21 12:41:24.510101 containerd[1520]: time="2025-03-21T12:41:24.510065940Z" level=info msg="StartContainer for \"545d96fc407b899577eeaa2fddbb0f722a2d4846c1be7cd09df27bd98c8ce50d\" returns successfully" Mar 21 12:41:24.625300 containerd[1520]: time="2025-03-21T12:41:24.625175970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4l5h,Uid:0e815f45-8221-492c-b826-aef1cf581aeb,Namespace:calico-system,Attempt:0,}" Mar 21 12:41:24.728900 systemd-networkd[1436]: cali4291a8dc5be: Link UP Mar 21 12:41:24.729107 systemd-networkd[1436]: cali4291a8dc5be: Gained carrier Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.659 [INFO][4758] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--w4l5h-eth0 csi-node-driver- calico-system 0e815f45-8221-492c-b826-aef1cf581aeb 593 0 2025-03-21 12:41:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-w4l5h eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4291a8dc5be [] []}} ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.659 [INFO][4758] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" 
Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.688 [INFO][4772] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" HandleID="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Workload="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.698 [INFO][4772] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" HandleID="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Workload="localhost-k8s-csi--node--driver--w4l5h-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aad70), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-w4l5h", "timestamp":"2025-03-21 12:41:24.688064157 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.698 [INFO][4772] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.698 [INFO][4772] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.698 [INFO][4772] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.700 [INFO][4772] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.703 [INFO][4772] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.708 [INFO][4772] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.709 [INFO][4772] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.711 [INFO][4772] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.711 [INFO][4772] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.713 [INFO][4772] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.717 [INFO][4772] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.722 [INFO][4772] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 
handle="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.722 [INFO][4772] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" host="localhost" Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.722 [INFO][4772] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:41:24.744864 containerd[1520]: 2025-03-21 12:41:24.722 [INFO][4772] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" HandleID="k8s-pod-network.8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Workload="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.725 [INFO][4758] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4l5h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e815f45-8221-492c-b826-aef1cf581aeb", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-w4l5h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4291a8dc5be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.726 [INFO][4758] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.726 [INFO][4758] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4291a8dc5be ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.728 [INFO][4758] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.728 [INFO][4758] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" 
Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--w4l5h-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e815f45-8221-492c-b826-aef1cf581aeb", ResourceVersion:"593", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 41, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa", Pod:"csi-node-driver-w4l5h", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4291a8dc5be", MAC:"ae:70:90:50:ba:7e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:41:24.745543 containerd[1520]: 2025-03-21 12:41:24.738 [INFO][4758] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" Namespace="calico-system" Pod="csi-node-driver-w4l5h" WorkloadEndpoint="localhost-k8s-csi--node--driver--w4l5h-eth0" Mar 21 12:41:24.761387 kubelet[2756]: I0321 
12:41:24.761308 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tvb4m" podStartSLOduration=31.761289798 podStartE2EDuration="31.761289798s" podCreationTimestamp="2025-03-21 12:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:41:24.748983784 +0000 UTC m=+45.215023778" watchObservedRunningTime="2025-03-21 12:41:24.761289798 +0000 UTC m=+45.227329792" Mar 21 12:41:24.772213 kubelet[2756]: I0321 12:41:24.771489 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-g77jd" podStartSLOduration=31.771469984 podStartE2EDuration="31.771469984s" podCreationTimestamp="2025-03-21 12:40:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:41:24.761708557 +0000 UTC m=+45.227748551" watchObservedRunningTime="2025-03-21 12:41:24.771469984 +0000 UTC m=+45.237509968" Mar 21 12:41:24.786601 containerd[1520]: time="2025-03-21T12:41:24.786237854Z" level=info msg="connecting to shim 8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa" address="unix:///run/containerd/s/c06223496e45f33e33d3113b108d1778571c87ee71acef69de3badc89cecc092" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:41:24.812475 systemd[1]: Started cri-containerd-8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa.scope - libcontainer container 8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa. 
Mar 21 12:41:24.825050 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:41:24.907273 containerd[1520]: time="2025-03-21T12:41:24.906362593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-w4l5h,Uid:0e815f45-8221-492c-b826-aef1cf581aeb,Namespace:calico-system,Attempt:0,} returns sandbox id \"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa\""
Mar 21 12:41:24.958470 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL
Mar 21 12:41:25.214536 systemd-networkd[1436]: caliab644a02157: Gained IPv6LL
Mar 21 12:41:25.278507 systemd-networkd[1436]: cali04cc2026174: Gained IPv6LL
Mar 21 12:41:25.535561 systemd-networkd[1436]: cali4c05e83ac84: Gained IPv6LL
Mar 21 12:41:25.727508 systemd-networkd[1436]: cali6058e183544: Gained IPv6LL
Mar 21 12:41:26.046593 systemd-networkd[1436]: cali9235b1c8605: Gained IPv6LL
Mar 21 12:41:26.558565 systemd-networkd[1436]: cali4291a8dc5be: Gained IPv6LL
Mar 21 12:41:27.815537 containerd[1520]: time="2025-03-21T12:41:27.815467718Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:27.842731 containerd[1520]: time="2025-03-21T12:41:27.842663268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204"
Mar 21 12:41:27.868755 containerd[1520]: time="2025-03-21T12:41:27.868720435Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:27.909311 containerd[1520]: time="2025-03-21T12:41:27.909232935Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:27.909840 containerd[1520]: time="2025-03-21T12:41:27.909802447Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 3.499760132s"
Mar 21 12:41:27.909840 containerd[1520]: time="2025-03-21T12:41:27.909845290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 21 12:41:27.910918 containerd[1520]: time="2025-03-21T12:41:27.910891943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\""
Mar 21 12:41:27.912066 containerd[1520]: time="2025-03-21T12:41:27.912040956Z" level=info msg="CreateContainer within sandbox \"ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 21 12:41:28.032681 containerd[1520]: time="2025-03-21T12:41:28.032619825Z" level=info msg="Container d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:41:28.040007 containerd[1520]: time="2025-03-21T12:41:28.039951454Z" level=info msg="CreateContainer within sandbox \"ca6b695ca4de0fb6aa998f23e6c6d54f9b9a087707e612c78b0c843a35fb534e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed\""
Mar 21 12:41:28.040592 containerd[1520]: time="2025-03-21T12:41:28.040549471Z" level=info msg="StartContainer for \"d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed\""
Mar 21 12:41:28.042052 containerd[1520]: time="2025-03-21T12:41:28.042005962Z" level=info msg="connecting to shim d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed" address="unix:///run/containerd/s/13fd4f9cd8a7b181207a915a6f1c5246a778da224cd5930673d4c61283d26b45" protocol=ttrpc version=3
Mar 21 12:41:28.066475 systemd[1]: Started cri-containerd-d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed.scope - libcontainer container d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed.
Mar 21 12:41:28.109260 containerd[1520]: time="2025-03-21T12:41:28.109221491Z" level=info msg="StartContainer for \"d43b710e981ae9ef2ed9135b8277c3a25867c5e24788881084c27567c24af9ed\" returns successfully"
Mar 21 12:41:28.358670 containerd[1520]: time="2025-03-21T12:41:28.358591619Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:28.360371 containerd[1520]: time="2025-03-21T12:41:28.359808252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77"
Mar 21 12:41:28.362876 containerd[1520]: time="2025-03-21T12:41:28.362846749Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 451.925578ms"
Mar 21 12:41:28.362876 containerd[1520]: time="2025-03-21T12:41:28.362874964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\""
Mar 21 12:41:28.365666 containerd[1520]: time="2025-03-21T12:41:28.365507979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\""
Mar 21 12:41:28.367762 containerd[1520]: time="2025-03-21T12:41:28.367727408Z" level=info msg="CreateContainer within sandbox \"555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Mar 21 12:41:28.376369 containerd[1520]: time="2025-03-21T12:41:28.376110969Z" level=info msg="Container e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:41:28.386206 containerd[1520]: time="2025-03-21T12:41:28.386159238Z" level=info msg="CreateContainer within sandbox \"555a6c69dfc81ec8fff831ab1d0f8d1dcbb9fb893c1afde587969b96a63eaf07\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f\""
Mar 21 12:41:28.386722 containerd[1520]: time="2025-03-21T12:41:28.386696094Z" level=info msg="StartContainer for \"e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f\""
Mar 21 12:41:28.388087 containerd[1520]: time="2025-03-21T12:41:28.388054935Z" level=info msg="connecting to shim e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f" address="unix:///run/containerd/s/f76640d2b438b85c25d8793a7191448d1cd48bbf872c3f1c82a0858ab3e65de7" protocol=ttrpc version=3
Mar 21 12:41:28.421607 systemd[1]: Started cri-containerd-e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f.scope - libcontainer container e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f.
Mar 21 12:41:28.485651 containerd[1520]: time="2025-03-21T12:41:28.485582085Z" level=info msg="StartContainer for \"e6451211301550f544b7a9da9eb0b1de12465f09890a9e34c797192de6a7d78f\" returns successfully"
Mar 21 12:41:28.581724 systemd[1]: Started sshd@13-10.0.0.131:22-10.0.0.1:53954.service - OpenSSH per-connection server daemon (10.0.0.1:53954).
Mar 21 12:41:28.646176 sshd[4935]: Accepted publickey for core from 10.0.0.1 port 53954 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:28.648229 sshd-session[4935]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:28.653751 systemd-logind[1497]: New session 14 of user core.
Mar 21 12:41:28.660468 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 21 12:41:28.783617 kubelet[2756]: I0321 12:41:28.783543 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-759b875587-752vj" podStartSLOduration=24.833076261 podStartE2EDuration="28.783526294s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:24.413180925 +0000 UTC m=+44.879220919" lastFinishedPulling="2025-03-21 12:41:28.363630958 +0000 UTC m=+48.829670952" observedRunningTime="2025-03-21 12:41:28.77283966 +0000 UTC m=+49.238879644" watchObservedRunningTime="2025-03-21 12:41:28.783526294 +0000 UTC m=+49.249566288"
Mar 21 12:41:28.808370 sshd[4937]: Connection closed by 10.0.0.1 port 53954
Mar 21 12:41:28.808817 sshd-session[4935]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:28.814677 systemd[1]: sshd@13-10.0.0.131:22-10.0.0.1:53954.service: Deactivated successfully.
Mar 21 12:41:28.817440 systemd[1]: session-14.scope: Deactivated successfully.
Mar 21 12:41:28.819467 systemd-logind[1497]: Session 14 logged out. Waiting for processes to exit.
Mar 21 12:41:28.820657 systemd-logind[1497]: Removed session 14.
Mar 21 12:41:29.760868 kubelet[2756]: I0321 12:41:29.760809 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:41:29.761164 kubelet[2756]: I0321 12:41:29.761038 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:41:30.851625 containerd[1520]: time="2025-03-21T12:41:30.851563374Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:30.852412 containerd[1520]: time="2025-03-21T12:41:30.852373592Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912"
Mar 21 12:41:30.853670 containerd[1520]: time="2025-03-21T12:41:30.853624810Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:30.855690 containerd[1520]: time="2025-03-21T12:41:30.855647159Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:30.856129 containerd[1520]: time="2025-03-21T12:41:30.856096324Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.489140662s"
Mar 21 12:41:30.856161 containerd[1520]: time="2025-03-21T12:41:30.856127184Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\""
Mar 21 12:41:30.856894 containerd[1520]: time="2025-03-21T12:41:30.856857326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 21 12:41:30.866199 containerd[1520]: time="2025-03-21T12:41:30.866162603Z" level=info msg="CreateContainer within sandbox \"df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 21 12:41:30.876053 containerd[1520]: time="2025-03-21T12:41:30.876015717Z" level=info msg="Container 89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:41:30.884799 containerd[1520]: time="2025-03-21T12:41:30.884765532Z" level=info msg="CreateContainer within sandbox \"df1cc7e382fe865e26afa1293f95db22b5797a5ac4f1ee73a540909660519292\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2\""
Mar 21 12:41:30.885290 containerd[1520]: time="2025-03-21T12:41:30.885257420Z" level=info msg="StartContainer for \"89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2\""
Mar 21 12:41:30.886506 containerd[1520]: time="2025-03-21T12:41:30.886467458Z" level=info msg="connecting to shim 89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2" address="unix:///run/containerd/s/9e3a74a376c1ac63f002d0ac23b41516aadd96ed45e8d7a8ab391b72ef70b94b" protocol=ttrpc version=3
Mar 21 12:41:30.917529 systemd[1]: Started cri-containerd-89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2.scope - libcontainer container 89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2.
Mar 21 12:41:30.975870 containerd[1520]: time="2025-03-21T12:41:30.975830288Z" level=info msg="StartContainer for \"89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2\" returns successfully"
Mar 21 12:41:31.774780 kubelet[2756]: I0321 12:41:31.774277 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-759b875587-qqd6m" podStartSLOduration=28.272868236 podStartE2EDuration="31.774259404s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:24.409372314 +0000 UTC m=+44.875412308" lastFinishedPulling="2025-03-21 12:41:27.910763482 +0000 UTC m=+48.376803476" observedRunningTime="2025-03-21 12:41:28.784271848 +0000 UTC m=+49.250311842" watchObservedRunningTime="2025-03-21 12:41:31.774259404 +0000 UTC m=+52.240299408"
Mar 21 12:41:31.774780 kubelet[2756]: I0321 12:41:31.774440 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-78d6c95c47-bvbbf" podStartSLOduration=25.362870694 podStartE2EDuration="31.774435717s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:24.445192689 +0000 UTC m=+44.911232683" lastFinishedPulling="2025-03-21 12:41:30.856757712 +0000 UTC m=+51.322797706" observedRunningTime="2025-03-21 12:41:31.773918159 +0000 UTC m=+52.239958163" watchObservedRunningTime="2025-03-21 12:41:31.774435717 +0000 UTC m=+52.240475711"
Mar 21 12:41:31.818618 containerd[1520]: time="2025-03-21T12:41:31.818564472Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2\" id:\"936108f39b0f2fb5bb8f514e44d52f59e644444bb15760a091a01d6cb26d3e63\" pid:5007 exited_at:{seconds:1742560891 nanos:818284507}"
Mar 21 12:41:32.251673 containerd[1520]: time="2025-03-21T12:41:32.251615622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:32.252350 containerd[1520]: time="2025-03-21T12:41:32.252271278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 21 12:41:32.253368 containerd[1520]: time="2025-03-21T12:41:32.253325279Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:32.255269 containerd[1520]: time="2025-03-21T12:41:32.255243441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:32.255689 containerd[1520]: time="2025-03-21T12:41:32.255664500Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.398774992s"
Mar 21 12:41:32.255725 containerd[1520]: time="2025-03-21T12:41:32.255690070Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 21 12:41:32.257372 containerd[1520]: time="2025-03-21T12:41:32.257322888Z" level=info msg="CreateContainer within sandbox \"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 21 12:41:32.271427 containerd[1520]: time="2025-03-21T12:41:32.271378438Z" level=info msg="Container 42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:41:32.278925 containerd[1520]: time="2025-03-21T12:41:32.278891808Z" level=info msg="CreateContainer within sandbox \"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72\""
Mar 21 12:41:32.279377 containerd[1520]: time="2025-03-21T12:41:32.279352575Z" level=info msg="StartContainer for \"42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72\""
Mar 21 12:41:32.280620 containerd[1520]: time="2025-03-21T12:41:32.280596355Z" level=info msg="connecting to shim 42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72" address="unix:///run/containerd/s/c06223496e45f33e33d3113b108d1778571c87ee71acef69de3badc89cecc092" protocol=ttrpc version=3
Mar 21 12:41:32.310484 systemd[1]: Started cri-containerd-42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72.scope - libcontainer container 42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72.
Mar 21 12:41:32.420549 containerd[1520]: time="2025-03-21T12:41:32.420508780Z" level=info msg="StartContainer for \"42cbf7eaa498e3740b84349c2f9c0cb002de2f47f640a243f7a796bff7389b72\" returns successfully"
Mar 21 12:41:32.421467 containerd[1520]: time="2025-03-21T12:41:32.421436315Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 21 12:41:33.823977 systemd[1]: Started sshd@14-10.0.0.131:22-10.0.0.1:53966.service - OpenSSH per-connection server daemon (10.0.0.1:53966).
Mar 21 12:41:33.876066 sshd[5060]: Accepted publickey for core from 10.0.0.1 port 53966 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:33.877789 sshd-session[5060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:33.882316 systemd-logind[1497]: New session 15 of user core.
Mar 21 12:41:33.892462 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 21 12:41:34.022320 sshd[5062]: Connection closed by 10.0.0.1 port 53966
Mar 21 12:41:34.022640 sshd-session[5060]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:34.026550 systemd[1]: sshd@14-10.0.0.131:22-10.0.0.1:53966.service: Deactivated successfully.
Mar 21 12:41:34.028494 systemd[1]: session-15.scope: Deactivated successfully.
Mar 21 12:41:34.029247 systemd-logind[1497]: Session 15 logged out. Waiting for processes to exit.
Mar 21 12:41:34.030127 systemd-logind[1497]: Removed session 15.
Mar 21 12:41:34.156574 containerd[1520]: time="2025-03-21T12:41:34.156513472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:34.157431 containerd[1520]: time="2025-03-21T12:41:34.157390186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 21 12:41:34.158728 containerd[1520]: time="2025-03-21T12:41:34.158683520Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:34.160979 containerd[1520]: time="2025-03-21T12:41:34.160766740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:41:34.161324 containerd[1520]: time="2025-03-21T12:41:34.161289015Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.739820949s"
Mar 21 12:41:34.161324 containerd[1520]: time="2025-03-21T12:41:34.161319665Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 21 12:41:34.163072 containerd[1520]: time="2025-03-21T12:41:34.163051041Z" level=info msg="CreateContainer within sandbox \"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 21 12:41:34.170575 containerd[1520]: time="2025-03-21T12:41:34.170536383Z" level=info msg="Container 0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:41:34.180156 containerd[1520]: time="2025-03-21T12:41:34.180124221Z" level=info msg="CreateContainer within sandbox \"8dd0d158fb6c77f28d85773600063d676c316bf3a5378665e165cc437e4fddaa\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7\""
Mar 21 12:41:34.180578 containerd[1520]: time="2025-03-21T12:41:34.180511986Z" level=info msg="StartContainer for \"0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7\""
Mar 21 12:41:34.181825 containerd[1520]: time="2025-03-21T12:41:34.181800330Z" level=info msg="connecting to shim 0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7" address="unix:///run/containerd/s/c06223496e45f33e33d3113b108d1778571c87ee71acef69de3badc89cecc092" protocol=ttrpc version=3
Mar 21 12:41:34.217520 systemd[1]: Started cri-containerd-0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7.scope - libcontainer container 0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7.
Mar 21 12:41:34.338206 containerd[1520]: time="2025-03-21T12:41:34.338170348Z" level=info msg="StartContainer for \"0f34159bd28907a3039915412d9bfb116d9a8b56e7cd2f144f92adeb328faae7\" returns successfully"
Mar 21 12:41:34.698635 kubelet[2756]: I0321 12:41:34.698584 2756 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 21 12:41:34.698635 kubelet[2756]: I0321 12:41:34.698630 2756 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 21 12:41:34.785416 kubelet[2756]: I0321 12:41:34.785357 2756 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-w4l5h" podStartSLOduration=25.531115409999998 podStartE2EDuration="34.785299612s" podCreationTimestamp="2025-03-21 12:41:00 +0000 UTC" firstStartedPulling="2025-03-21 12:41:24.907768965 +0000 UTC m=+45.373808959" lastFinishedPulling="2025-03-21 12:41:34.161953167 +0000 UTC m=+54.627993161" observedRunningTime="2025-03-21 12:41:34.784473326 +0000 UTC m=+55.250513320" watchObservedRunningTime="2025-03-21 12:41:34.785299612 +0000 UTC m=+55.251339606"
Mar 21 12:41:39.035226 systemd[1]: Started sshd@15-10.0.0.131:22-10.0.0.1:33638.service - OpenSSH per-connection server daemon (10.0.0.1:33638).
Mar 21 12:41:39.093657 sshd[5112]: Accepted publickey for core from 10.0.0.1 port 33638 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:39.095489 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:39.099693 systemd-logind[1497]: New session 16 of user core.
Mar 21 12:41:39.110466 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 21 12:41:39.249991 sshd[5114]: Connection closed by 10.0.0.1 port 33638
Mar 21 12:41:39.250238 sshd-session[5112]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:39.255530 systemd[1]: sshd@15-10.0.0.131:22-10.0.0.1:33638.service: Deactivated successfully.
Mar 21 12:41:39.257864 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 12:41:39.258620 systemd-logind[1497]: Session 16 logged out. Waiting for processes to exit.
Mar 21 12:41:39.259485 systemd-logind[1497]: Removed session 16.
Mar 21 12:41:41.450606 containerd[1520]: time="2025-03-21T12:41:41.450558600Z" level=info msg="TaskExit event in podsandbox handler container_id:\"89333bf71ebff112ad683cf04b8ad75baa116008f68be36e5aac31330a45bff2\" id:\"b11ccbdb31b039c77d891f1d5f4656ea5dcb1442b97b29f6fa7d13d673dda8b8\" pid:5142 exited_at:{seconds:1742560901 nanos:450373061}"
Mar 21 12:41:44.265254 systemd[1]: Started sshd@16-10.0.0.131:22-10.0.0.1:59128.service - OpenSSH per-connection server daemon (10.0.0.1:59128).
Mar 21 12:41:44.313323 sshd[5159]: Accepted publickey for core from 10.0.0.1 port 59128 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:44.314793 sshd-session[5159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:44.319619 systemd-logind[1497]: New session 17 of user core.
Mar 21 12:41:44.329494 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 12:41:44.437250 sshd[5161]: Connection closed by 10.0.0.1 port 59128
Mar 21 12:41:44.437821 sshd-session[5159]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:44.449620 systemd[1]: sshd@16-10.0.0.131:22-10.0.0.1:59128.service: Deactivated successfully.
Mar 21 12:41:44.451710 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 12:41:44.453491 systemd-logind[1497]: Session 17 logged out. Waiting for processes to exit.
Mar 21 12:41:44.454800 systemd[1]: Started sshd@17-10.0.0.131:22-10.0.0.1:59132.service - OpenSSH per-connection server daemon (10.0.0.1:59132).
Mar 21 12:41:44.456230 systemd-logind[1497]: Removed session 17.
Mar 21 12:41:44.503676 sshd[5173]: Accepted publickey for core from 10.0.0.1 port 59132 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:44.505237 sshd-session[5173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:44.509978 systemd-logind[1497]: New session 18 of user core.
Mar 21 12:41:44.519502 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 12:41:44.799652 sshd[5176]: Connection closed by 10.0.0.1 port 59132
Mar 21 12:41:44.800199 sshd-session[5173]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:44.810300 systemd[1]: sshd@17-10.0.0.131:22-10.0.0.1:59132.service: Deactivated successfully.
Mar 21 12:41:44.812474 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 12:41:44.814185 systemd-logind[1497]: Session 18 logged out. Waiting for processes to exit.
Mar 21 12:41:44.815735 systemd[1]: Started sshd@18-10.0.0.131:22-10.0.0.1:59148.service - OpenSSH per-connection server daemon (10.0.0.1:59148).
Mar 21 12:41:44.816697 systemd-logind[1497]: Removed session 18.
Mar 21 12:41:44.861055 sshd[5189]: Accepted publickey for core from 10.0.0.1 port 59148 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:44.862512 sshd-session[5189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:44.867026 systemd-logind[1497]: New session 19 of user core.
Mar 21 12:41:44.881517 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 12:41:46.404766 sshd[5192]: Connection closed by 10.0.0.1 port 59148
Mar 21 12:41:46.406260 sshd-session[5189]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:46.420095 systemd[1]: sshd@18-10.0.0.131:22-10.0.0.1:59148.service: Deactivated successfully.
Mar 21 12:41:46.421977 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 12:41:46.422218 systemd[1]: session-19.scope: Consumed 577ms CPU time, 66.3M memory peak.
Mar 21 12:41:46.424413 systemd-logind[1497]: Session 19 logged out. Waiting for processes to exit.
Mar 21 12:41:46.426104 systemd[1]: Started sshd@19-10.0.0.131:22-10.0.0.1:59150.service - OpenSSH per-connection server daemon (10.0.0.1:59150).
Mar 21 12:41:46.426821 systemd-logind[1497]: Removed session 19.
Mar 21 12:41:46.483010 sshd[5212]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:46.484467 sshd-session[5212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:46.488781 systemd-logind[1497]: New session 20 of user core.
Mar 21 12:41:46.500438 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 12:41:46.704661 sshd[5215]: Connection closed by 10.0.0.1 port 59150
Mar 21 12:41:46.705274 sshd-session[5212]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:46.714318 systemd[1]: sshd@19-10.0.0.131:22-10.0.0.1:59150.service: Deactivated successfully.
Mar 21 12:41:46.716432 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 12:41:46.718023 systemd-logind[1497]: Session 20 logged out. Waiting for processes to exit.
Mar 21 12:41:46.719480 systemd[1]: Started sshd@20-10.0.0.131:22-10.0.0.1:59158.service - OpenSSH per-connection server daemon (10.0.0.1:59158).
Mar 21 12:41:46.720202 systemd-logind[1497]: Removed session 20.
Mar 21 12:41:46.767182 sshd[5225]: Accepted publickey for core from 10.0.0.1 port 59158 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:46.768570 sshd-session[5225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:46.772792 systemd-logind[1497]: New session 21 of user core.
Mar 21 12:41:46.778456 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 21 12:41:46.887799 sshd[5228]: Connection closed by 10.0.0.1 port 59158
Mar 21 12:41:46.888139 sshd-session[5225]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:46.892737 systemd[1]: sshd@20-10.0.0.131:22-10.0.0.1:59158.service: Deactivated successfully.
Mar 21 12:41:46.894972 systemd[1]: session-21.scope: Deactivated successfully.
Mar 21 12:41:46.895950 systemd-logind[1497]: Session 21 logged out. Waiting for processes to exit.
Mar 21 12:41:46.896968 systemd-logind[1497]: Removed session 21.
Mar 21 12:41:49.663998 kubelet[2756]: I0321 12:41:49.663948 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:41:51.900296 systemd[1]: Started sshd@21-10.0.0.131:22-10.0.0.1:59164.service - OpenSSH per-connection server daemon (10.0.0.1:59164).
Mar 21 12:41:51.945320 sshd[5247]: Accepted publickey for core from 10.0.0.1 port 59164 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:51.946862 sshd-session[5247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:51.950813 systemd-logind[1497]: New session 22 of user core.
Mar 21 12:41:51.959441 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 21 12:41:52.063634 sshd[5249]: Connection closed by 10.0.0.1 port 59164
Mar 21 12:41:52.063985 sshd-session[5247]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:52.067807 systemd[1]: sshd@21-10.0.0.131:22-10.0.0.1:59164.service: Deactivated successfully.
Mar 21 12:41:52.069815 systemd[1]: session-22.scope: Deactivated successfully.
Mar 21 12:41:52.070594 systemd-logind[1497]: Session 22 logged out. Waiting for processes to exit.
Mar 21 12:41:52.071696 systemd-logind[1497]: Removed session 22.
Mar 21 12:41:53.070029 containerd[1520]: time="2025-03-21T12:41:53.069980369Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c3cfde050db0508a677b9d5b16b2f8578bc0d043d22a1c49691888c4f14bb94d\" id:\"a6c21f419958dc84f10e2d3d2e3316f07782c24e76c7fb671c88e5aaed1f13e9\" pid:5272 exited_at:{seconds:1742560913 nanos:69661390}"
Mar 21 12:41:57.082320 systemd[1]: Started sshd@22-10.0.0.131:22-10.0.0.1:58422.service - OpenSSH per-connection server daemon (10.0.0.1:58422).
Mar 21 12:41:57.129946 sshd[5288]: Accepted publickey for core from 10.0.0.1 port 58422 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:41:57.131616 sshd-session[5288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:41:57.136001 systemd-logind[1497]: New session 23 of user core.
Mar 21 12:41:57.145567 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 21 12:41:57.264406 sshd[5290]: Connection closed by 10.0.0.1 port 58422
Mar 21 12:41:57.264994 sshd-session[5288]: pam_unix(sshd:session): session closed for user core
Mar 21 12:41:57.269297 systemd[1]: sshd@22-10.0.0.131:22-10.0.0.1:58422.service: Deactivated successfully.
Mar 21 12:41:57.271664 systemd[1]: session-23.scope: Deactivated successfully.
Mar 21 12:41:57.272551 systemd-logind[1497]: Session 23 logged out. Waiting for processes to exit.
Mar 21 12:41:57.273917 systemd-logind[1497]: Removed session 23.
Mar 21 12:42:01.119827 kubelet[2756]: I0321 12:42:01.119776 2756 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Mar 21 12:42:02.278355 systemd[1]: Started sshd@23-10.0.0.131:22-10.0.0.1:58428.service - OpenSSH per-connection server daemon (10.0.0.1:58428).
Mar 21 12:42:02.332467 sshd[5306]: Accepted publickey for core from 10.0.0.1 port 58428 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:42:02.334012 sshd-session[5306]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:42:02.338373 systemd-logind[1497]: New session 24 of user core.
Mar 21 12:42:02.346474 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 21 12:42:02.455841 sshd[5308]: Connection closed by 10.0.0.1 port 58428
Mar 21 12:42:02.456255 sshd-session[5306]: pam_unix(sshd:session): session closed for user core
Mar 21 12:42:02.460812 systemd[1]: sshd@23-10.0.0.131:22-10.0.0.1:58428.service: Deactivated successfully.
Mar 21 12:42:02.463042 systemd[1]: session-24.scope: Deactivated successfully.
Mar 21 12:42:02.463858 systemd-logind[1497]: Session 24 logged out. Waiting for processes to exit.
Mar 21 12:42:02.464733 systemd-logind[1497]: Removed session 24.