Mar 21 12:36:38.877823 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Mar 21 10:52:59 -00 2025
Mar 21 12:36:38.877844 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:36:38.877853 kernel: BIOS-provided physical RAM map:
Mar 21 12:36:38.877860 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Mar 21 12:36:38.877866 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Mar 21 12:36:38.877876 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Mar 21 12:36:38.877883 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Mar 21 12:36:38.877890 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Mar 21 12:36:38.877897 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Mar 21 12:36:38.877904 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Mar 21 12:36:38.877911 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Mar 21 12:36:38.877917 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Mar 21 12:36:38.877924 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Mar 21 12:36:38.877931 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Mar 21 12:36:38.877941 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Mar 21 12:36:38.877949 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Mar 21 12:36:38.877956 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 21 12:36:38.877963 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 21 12:36:38.877970 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 21 12:36:38.877980 kernel: NX (Execute Disable) protection: active
Mar 21 12:36:38.877987 kernel: APIC: Static calls initialized
Mar 21 12:36:38.877994 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable
Mar 21 12:36:38.878002 kernel: e820: update [mem 0x9a187018-0x9a190c57] usable ==> usable
Mar 21 12:36:38.878009 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable
Mar 21 12:36:38.878016 kernel: e820: update [mem 0x9a14a018-0x9a186e57] usable ==> usable
Mar 21 12:36:38.878022 kernel: extended physical RAM map:
Mar 21 12:36:38.878030 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Mar 21 12:36:38.878037 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Mar 21 12:36:38.878044 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Mar 21 12:36:38.878051 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Mar 21 12:36:38.878061 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a14a017] usable
Mar 21 12:36:38.878068 kernel: reserve setup_data: [mem 0x000000009a14a018-0x000000009a186e57] usable
Mar 21 12:36:38.878075 kernel: reserve setup_data: [mem 0x000000009a186e58-0x000000009a187017] usable
Mar 21 12:36:38.878082 kernel: reserve setup_data: [mem 0x000000009a187018-0x000000009a190c57] usable
Mar 21 12:36:38.878089 kernel: reserve setup_data: [mem 0x000000009a190c58-0x000000009b8ecfff] usable
Mar 21 12:36:38.878096 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Mar 21 12:36:38.878103 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Mar 21 12:36:38.878110 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Mar 21 12:36:38.878117 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Mar 21 12:36:38.878125 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Mar 21 12:36:38.878138 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Mar 21 12:36:38.878145 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Mar 21 12:36:38.878153 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Mar 21 12:36:38.878160 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Mar 21 12:36:38.878168 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 21 12:36:38.878175 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Mar 21 12:36:38.878185 kernel: efi: EFI v2.7 by EDK II
Mar 21 12:36:38.878192 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1f7018 RNG=0x9bb73018
Mar 21 12:36:38.878200 kernel: random: crng init done
Mar 21 12:36:38.878207 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Mar 21 12:36:38.878215 kernel: secureboot: Secure boot enabled
Mar 21 12:36:38.878222 kernel: SMBIOS 2.8 present.
Mar 21 12:36:38.878244 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Mar 21 12:36:38.878252 kernel: Hypervisor detected: KVM
Mar 21 12:36:38.878259 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 21 12:36:38.878267 kernel: kvm-clock: using sched offset of 3852297046 cycles
Mar 21 12:36:38.878275 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 21 12:36:38.878286 kernel: tsc: Detected 2794.748 MHz processor
Mar 21 12:36:38.878294 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 21 12:36:38.878301 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 21 12:36:38.878309 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Mar 21 12:36:38.878317 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Mar 21 12:36:38.878325 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 21 12:36:38.878332 kernel: Using GB pages for direct mapping
Mar 21 12:36:38.878340 kernel: ACPI: Early table checksum verification disabled
Mar 21 12:36:38.878347 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Mar 21 12:36:38.878358 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Mar 21 12:36:38.878365 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878373 kernel: ACPI: DSDT 0x000000009BB7A000 002225 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878381 kernel: ACPI: FACS 0x000000009BBDD000 000040
Mar 21 12:36:38.878388 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878396 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878403 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878411 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 21 12:36:38.878419 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Mar 21 12:36:38.878429 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Mar 21 12:36:38.878437 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c224]
Mar 21 12:36:38.878444 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Mar 21 12:36:38.878452 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Mar 21 12:36:38.878460 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Mar 21 12:36:38.878467 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Mar 21 12:36:38.878475 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Mar 21 12:36:38.878482 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Mar 21 12:36:38.878490 kernel: No NUMA configuration found
Mar 21 12:36:38.878500 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Mar 21 12:36:38.878508 kernel: NODE_DATA(0) allocated [mem 0x9bf59000-0x9bf5efff]
Mar 21 12:36:38.878516 kernel: Zone ranges:
Mar 21 12:36:38.878523 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 21 12:36:38.878531 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Mar 21 12:36:38.878538 kernel: Normal empty
Mar 21 12:36:38.878546 kernel: Movable zone start for each node
Mar 21 12:36:38.878554 kernel: Early memory node ranges
Mar 21 12:36:38.878561 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Mar 21 12:36:38.878569 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Mar 21 12:36:38.878579 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Mar 21 12:36:38.878586 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Mar 21 12:36:38.878594 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Mar 21 12:36:38.878602 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Mar 21 12:36:38.878609 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 21 12:36:38.878617 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Mar 21 12:36:38.878624 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 21 12:36:38.878632 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Mar 21 12:36:38.878639 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Mar 21 12:36:38.878650 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Mar 21 12:36:38.878657 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 21 12:36:38.878671 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 21 12:36:38.878679 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 21 12:36:38.878687 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 21 12:36:38.878694 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 21 12:36:38.878702 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 21 12:36:38.878710 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 21 12:36:38.878717 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 21 12:36:38.878728 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 21 12:36:38.878735 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 21 12:36:38.878743 kernel: TSC deadline timer available
Mar 21 12:36:38.878750 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Mar 21 12:36:38.878758 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 21 12:36:38.878766 kernel: kvm-guest: KVM setup pv remote TLB flush
Mar 21 12:36:38.878782 kernel: kvm-guest: setup PV sched yield
Mar 21 12:36:38.878792 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Mar 21 12:36:38.878800 kernel: Booting paravirtualized kernel on KVM
Mar 21 12:36:38.878808 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 21 12:36:38.878816 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Mar 21 12:36:38.878824 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Mar 21 12:36:38.878835 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Mar 21 12:36:38.878842 kernel: pcpu-alloc: [0] 0 1 2 3
Mar 21 12:36:38.878850 kernel: kvm-guest: PV spinlocks enabled
Mar 21 12:36:38.878858 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Mar 21 12:36:38.878867 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:36:38.878875 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 21 12:36:38.878883 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Mar 21 12:36:38.878891 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 21 12:36:38.878902 kernel: Fallback order for Node 0: 0
Mar 21 12:36:38.878910 kernel: Built 1 zonelists, mobility grouping on. Total pages: 625927
Mar 21 12:36:38.878918 kernel: Policy zone: DMA32
Mar 21 12:36:38.878926 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 21 12:36:38.878934 kernel: Memory: 2368304K/2552216K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43588K init, 1476K bss, 183656K reserved, 0K cma-reserved)
Mar 21 12:36:38.878945 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Mar 21 12:36:38.878962 kernel: ftrace: allocating 37985 entries in 149 pages
Mar 21 12:36:38.878971 kernel: ftrace: allocated 149 pages with 4 groups
Mar 21 12:36:38.878979 kernel: Dynamic Preempt: voluntary
Mar 21 12:36:38.878987 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 21 12:36:38.878996 kernel: rcu: RCU event tracing is enabled.
Mar 21 12:36:38.879019 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Mar 21 12:36:38.879028 kernel: Trampoline variant of Tasks RCU enabled.
Mar 21 12:36:38.879049 kernel: Rude variant of Tasks RCU enabled.
Mar 21 12:36:38.879068 kernel: Tracing variant of Tasks RCU enabled.
Mar 21 12:36:38.879076 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 21 12:36:38.879097 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Mar 21 12:36:38.879106 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Mar 21 12:36:38.879114 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 21 12:36:38.879122 kernel: Console: colour dummy device 80x25
Mar 21 12:36:38.879130 kernel: printk: console [ttyS0] enabled
Mar 21 12:36:38.879137 kernel: ACPI: Core revision 20230628
Mar 21 12:36:38.879146 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 21 12:36:38.879157 kernel: APIC: Switch to symmetric I/O mode setup
Mar 21 12:36:38.879164 kernel: x2apic enabled
Mar 21 12:36:38.879172 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 21 12:36:38.879180 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Mar 21 12:36:38.879188 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Mar 21 12:36:38.879197 kernel: kvm-guest: setup PV IPIs
Mar 21 12:36:38.879204 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 21 12:36:38.879212 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Mar 21 12:36:38.879220 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Mar 21 12:36:38.879266 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Mar 21 12:36:38.879274 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Mar 21 12:36:38.879282 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Mar 21 12:36:38.879290 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 21 12:36:38.879298 kernel: Spectre V2 : Mitigation: Retpolines
Mar 21 12:36:38.879306 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 21 12:36:38.879314 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 21 12:36:38.879322 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Mar 21 12:36:38.879330 kernel: RETBleed: Mitigation: untrained return thunk
Mar 21 12:36:38.879341 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 21 12:36:38.879349 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 21 12:36:38.879357 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Mar 21 12:36:38.879365 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Mar 21 12:36:38.879373 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Mar 21 12:36:38.879381 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 21 12:36:38.879389 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 21 12:36:38.879397 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 21 12:36:38.879407 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 21 12:36:38.879415 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Mar 21 12:36:38.879423 kernel: Freeing SMP alternatives memory: 32K
Mar 21 12:36:38.879431 kernel: pid_max: default: 32768 minimum: 301
Mar 21 12:36:38.879439 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 21 12:36:38.879447 kernel: landlock: Up and running.
Mar 21 12:36:38.879455 kernel: SELinux: Initializing.
Mar 21 12:36:38.879463 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:36:38.879471 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Mar 21 12:36:38.879481 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Mar 21 12:36:38.879489 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:36:38.879497 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:36:38.879505 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Mar 21 12:36:38.879513 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Mar 21 12:36:38.879521 kernel: ... version: 0
Mar 21 12:36:38.879529 kernel: ... bit width: 48
Mar 21 12:36:38.879537 kernel: ... generic registers: 6
Mar 21 12:36:38.879545 kernel: ... value mask: 0000ffffffffffff
Mar 21 12:36:38.879555 kernel: ... max period: 00007fffffffffff
Mar 21 12:36:38.879563 kernel: ... fixed-purpose events: 0
Mar 21 12:36:38.879571 kernel: ... event mask: 000000000000003f
Mar 21 12:36:38.879579 kernel: signal: max sigframe size: 1776
Mar 21 12:36:38.879587 kernel: rcu: Hierarchical SRCU implementation.
Mar 21 12:36:38.879595 kernel: rcu: Max phase no-delay instances is 400.
Mar 21 12:36:38.879603 kernel: smp: Bringing up secondary CPUs ...
Mar 21 12:36:38.879611 kernel: smpboot: x86: Booting SMP configuration:
Mar 21 12:36:38.879618 kernel: .... node #0, CPUs: #1 #2 #3
Mar 21 12:36:38.879629 kernel: smp: Brought up 1 node, 4 CPUs
Mar 21 12:36:38.879637 kernel: smpboot: Max logical packages: 1
Mar 21 12:36:38.879645 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Mar 21 12:36:38.879652 kernel: devtmpfs: initialized
Mar 21 12:36:38.879666 kernel: x86/mm: Memory block size: 128MB
Mar 21 12:36:38.879674 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Mar 21 12:36:38.879682 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Mar 21 12:36:38.879690 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 21 12:36:38.879698 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Mar 21 12:36:38.879709 kernel: pinctrl core: initialized pinctrl subsystem
Mar 21 12:36:38.879717 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 21 12:36:38.879725 kernel: audit: initializing netlink subsys (disabled)
Mar 21 12:36:38.879733 kernel: audit: type=2000 audit(1742560598.031:1): state=initialized audit_enabled=0 res=1
Mar 21 12:36:38.879740 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 21 12:36:38.879748 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 21 12:36:38.879756 kernel: cpuidle: using governor menu
Mar 21 12:36:38.879764 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 21 12:36:38.879772 kernel: dca service started, version 1.12.1
Mar 21 12:36:38.879783 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Mar 21 12:36:38.879791 kernel: PCI: Using configuration type 1 for base access
Mar 21 12:36:38.879799 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 21 12:36:38.879807 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Mar 21 12:36:38.879815 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Mar 21 12:36:38.879823 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 21 12:36:38.879830 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 21 12:36:38.879838 kernel: ACPI: Added _OSI(Module Device)
Mar 21 12:36:38.879846 kernel: ACPI: Added _OSI(Processor Device)
Mar 21 12:36:38.879856 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 21 12:36:38.879864 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 21 12:36:38.879872 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 21 12:36:38.879880 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 21 12:36:38.879888 kernel: ACPI: Interpreter enabled
Mar 21 12:36:38.879896 kernel: ACPI: PM: (supports S0 S5)
Mar 21 12:36:38.879903 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 21 12:36:38.879911 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 21 12:36:38.879919 kernel: PCI: Using E820 reservations for host bridge windows
Mar 21 12:36:38.879930 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Mar 21 12:36:38.879938 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 21 12:36:38.880118 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Mar 21 12:36:38.880351 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Mar 21 12:36:38.880828 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Mar 21 12:36:38.880842 kernel: PCI host bridge to bus 0000:00
Mar 21 12:36:38.881002 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 21 12:36:38.881127 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 21 12:36:38.881259 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 21 12:36:38.881374 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Mar 21 12:36:38.881486 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Mar 21 12:36:38.881608 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Mar 21 12:36:38.881732 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 21 12:36:38.881874 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Mar 21 12:36:38.882020 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Mar 21 12:36:38.882143 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Mar 21 12:36:38.882289 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Mar 21 12:36:38.882417 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Mar 21 12:36:38.882541 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Mar 21 12:36:38.882673 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 21 12:36:38.882821 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Mar 21 12:36:38.882946 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Mar 21 12:36:38.883076 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Mar 21 12:36:38.883200 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Mar 21 12:36:38.883347 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Mar 21 12:36:38.883540 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Mar 21 12:36:38.883676 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Mar 21 12:36:38.883807 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Mar 21 12:36:38.883940 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 21 12:36:38.884065 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Mar 21 12:36:38.884188 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Mar 21 12:36:38.884329 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Mar 21 12:36:38.884453 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Mar 21 12:36:38.884592 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Mar 21 12:36:38.884734 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Mar 21 12:36:38.884868 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Mar 21 12:36:38.884992 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Mar 21 12:36:38.885117 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Mar 21 12:36:38.885265 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Mar 21 12:36:38.885475 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Mar 21 12:36:38.885488 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 21 12:36:38.885501 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 21 12:36:38.885509 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 21 12:36:38.885517 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 21 12:36:38.885525 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Mar 21 12:36:38.885533 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Mar 21 12:36:38.885541 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Mar 21 12:36:38.885549 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Mar 21 12:36:38.885557 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Mar 21 12:36:38.885567 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Mar 21 12:36:38.885575 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Mar 21 12:36:38.885583 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Mar 21 12:36:38.885591 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Mar 21 12:36:38.885599 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Mar 21 12:36:38.885607 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Mar 21 12:36:38.885615 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Mar 21 12:36:38.885623 kernel: iommu: Default domain type: Translated
Mar 21 12:36:38.885630 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 21 12:36:38.885638 kernel: efivars: Registered efivars operations
Mar 21 12:36:38.885649 kernel: PCI: Using ACPI for IRQ routing
Mar 21 12:36:38.885657 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 21 12:36:38.885673 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Mar 21 12:36:38.885681 kernel: e820: reserve RAM buffer [mem 0x9a14a018-0x9bffffff]
Mar 21 12:36:38.885689 kernel: e820: reserve RAM buffer [mem 0x9a187018-0x9bffffff]
Mar 21 12:36:38.885697 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Mar 21 12:36:38.885705 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Mar 21 12:36:38.885831 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Mar 21 12:36:38.885957 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Mar 21 12:36:38.886079 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 21 12:36:38.886090 kernel: vgaarb: loaded
Mar 21 12:36:38.886098 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 21 12:36:38.886106 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 21 12:36:38.886114 kernel: clocksource: Switched to clocksource kvm-clock
Mar 21 12:36:38.886122 kernel: VFS: Disk quotas dquot_6.6.0
Mar 21 12:36:38.886130 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 21 12:36:38.886138 kernel: pnp: PnP ACPI init
Mar 21 12:36:38.886304 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Mar 21 12:36:38.886317 kernel: pnp: PnP ACPI: found 6 devices
Mar 21 12:36:38.886325 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 21 12:36:38.886333 kernel: NET: Registered PF_INET protocol family
Mar 21 12:36:38.886341 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Mar 21 12:36:38.886349 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Mar 21 12:36:38.886358 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 21 12:36:38.886366 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 21 12:36:38.886377 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Mar 21 12:36:38.886385 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Mar 21 12:36:38.886394 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:36:38.886402 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Mar 21 12:36:38.886410 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 21 12:36:38.886418 kernel: NET: Registered PF_XDP protocol family
Mar 21 12:36:38.886545 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Mar 21 12:36:38.886677 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Mar 21 12:36:38.886796 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 21 12:36:38.886910 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 21 12:36:38.887023 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 21 12:36:38.887135 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Mar 21 12:36:38.887262 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Mar 21 12:36:38.887377 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Mar 21 12:36:38.887387 kernel: PCI: CLS 0 bytes, default 64
Mar 21 12:36:38.887395 kernel: Initialise system trusted keyrings
Mar 21 12:36:38.887403 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Mar 21 12:36:38.887415 kernel: Key type asymmetric registered
Mar 21 12:36:38.887423 kernel: Asymmetric key parser 'x509' registered
Mar 21 12:36:38.887431 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 21 12:36:38.887439 kernel: io scheduler mq-deadline registered
Mar 21 12:36:38.887447 kernel: io scheduler kyber registered
Mar 21 12:36:38.887455 kernel: io scheduler bfq registered
Mar 21 12:36:38.887463 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 21 12:36:38.887489 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Mar 21 12:36:38.887500 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Mar 21 12:36:38.887511 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Mar 21 12:36:38.887519 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 21 12:36:38.887527 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 21 12:36:38.887538 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 21 12:36:38.887546 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 21 12:36:38.887555 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 21 12:36:38.887563 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 21 12:36:38.887698 kernel: rtc_cmos 00:04: RTC can wake from S4
Mar 21 12:36:38.887821 kernel: rtc_cmos 00:04: registered as rtc0
Mar 21 12:36:38.887938 kernel: rtc_cmos 00:04: setting system clock to 2025-03-21T12:36:38 UTC (1742560598)
Mar 21 12:36:38.888056 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Mar 21 12:36:38.888066 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Mar 21 12:36:38.888075 kernel: efifb: probing for efifb
Mar 21 12:36:38.888083 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Mar 21 12:36:38.888091 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Mar 21 12:36:38.888099 kernel: efifb: scrolling: redraw
Mar 21 12:36:38.888108 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Mar 21 12:36:38.888120 kernel: Console: switching to colour frame buffer device 160x50
Mar 21 12:36:38.888128 kernel: fb0: EFI VGA frame buffer device
Mar 21 12:36:38.888137 kernel: pstore: Using crash dump compression: deflate
Mar 21 12:36:38.888145 kernel: pstore: Registered efi_pstore as persistent store backend
Mar 21 12:36:38.888153 kernel: NET: Registered PF_INET6 protocol family
Mar 21 12:36:38.888161 kernel: Segment Routing with IPv6
Mar 21 12:36:38.888169 kernel: In-situ OAM (IOAM) with IPv6
Mar 21 12:36:38.888177 kernel: NET: Registered PF_PACKET protocol family
Mar 21 12:36:38.888186 kernel: Key type dns_resolver registered
Mar 21 12:36:38.888197 kernel: IPI shorthand broadcast: enabled
Mar 21 12:36:38.888205 kernel: sched_clock: Marking stable (608003215, 133357364)->(753726135, -12365556)
Mar 21 12:36:38.888213 kernel: registered taskstats version 1
Mar 21 12:36:38.888222 kernel: Loading compiled-in X.509 certificates
Mar 21 12:36:38.888315 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: d76f2258ffed89096a9428010e5ac0a0babcea9e'
Mar 21 12:36:38.888327 kernel: Key type .fscrypt registered
Mar 21 12:36:38.888335 kernel: Key type fscrypt-provisioning registered
Mar 21 12:36:38.888343 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 21 12:36:38.888351 kernel: ima: Allocated hash algorithm: sha1
Mar 21 12:36:38.888360 kernel: ima: No architecture policies found
Mar 21 12:36:38.888368 kernel: clk: Disabling unused clocks
Mar 21 12:36:38.888376 kernel: Freeing unused kernel image (initmem) memory: 43588K
Mar 21 12:36:38.888385 kernel: Write protecting the kernel read-only data: 40960k
Mar 21 12:36:38.888394 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K
Mar 21 12:36:38.888404 kernel: Run /init as init process
Mar 21 12:36:38.888415 kernel: with arguments:
Mar 21 12:36:38.888423 kernel: /init
Mar 21 12:36:38.888431 kernel: with environment:
Mar 21 12:36:38.888439 kernel: HOME=/
Mar 21 12:36:38.888447 kernel: TERM=linux
Mar 21 12:36:38.888455 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 21 12:36:38.888465 systemd[1]: Successfully made /usr/ read-only.
Mar 21 12:36:38.888483 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:36:38.888496 systemd[1]: Detected virtualization kvm.
Mar 21 12:36:38.888505 systemd[1]: Detected architecture x86-64.
Mar 21 12:36:38.888513 systemd[1]: Running in initrd.
Mar 21 12:36:38.888522 systemd[1]: No hostname configured, using default hostname.
Mar 21 12:36:38.888531 systemd[1]: Hostname set to .
Mar 21 12:36:38.888540 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:36:38.888548 systemd[1]: Queued start job for default target initrd.target.
Mar 21 12:36:38.888560 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:36:38.888569 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:36:38.888578 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 21 12:36:38.888587 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:36:38.888596 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 21 12:36:38.888606 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 21 12:36:38.888616 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 21 12:36:38.888628 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 21 12:36:38.888637 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:36:38.888646 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:36:38.888654 systemd[1]: Reached target paths.target - Path Units.
Mar 21 12:36:38.888670 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:36:38.888678 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:36:38.888687 systemd[1]: Reached target timers.target - Timer Units.
Mar 21 12:36:38.888696 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:36:38.888723 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:36:38.888733 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 21 12:36:38.888744 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Mar 21 12:36:38.888756 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:36:38.888767 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:36:38.888776 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:36:38.888784 systemd[1]: Reached target sockets.target - Socket Units.
Mar 21 12:36:38.888793 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Mar 21 12:36:38.888802 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:36:38.888813 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Mar 21 12:36:38.888822 systemd[1]: Starting systemd-fsck-usr.service...
Mar 21 12:36:38.888831 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:36:38.888839 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:36:38.888848 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:36:38.888857 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 21 12:36:38.888866 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:36:38.888903 systemd-journald[191]: Collecting audit messages is disabled.
Mar 21 12:36:38.888925 systemd[1]: Finished systemd-fsck-usr.service.
Mar 21 12:36:38.888935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 21 12:36:38.888944 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:36:38.888953 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:36:38.888962 systemd-journald[191]: Journal started
Mar 21 12:36:38.888981 systemd-journald[191]: Runtime Journal (/run/log/journal/3da644135bf544bb94f3f38096d6e374) is 6M, max 47.9M, 41.9M free.
Mar 21 12:36:38.892164 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:36:38.881617 systemd-modules-load[192]: Inserted module 'overlay'
Mar 21 12:36:38.895433 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 21 12:36:38.896717 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 21 12:36:38.901347 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 21 12:36:38.913258 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 21 12:36:38.914953 systemd-modules-load[192]: Inserted module 'br_netfilter'
Mar 21 12:36:38.915885 kernel: Bridge firewalling registered
Mar 21 12:36:38.918480 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:36:38.919188 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:36:38.921375 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:36:38.923286 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 21 12:36:38.926905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 21 12:36:38.934565 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:36:38.938653 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:36:38.941194 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 21 12:36:38.945021 dracut-cmdline[223]: dracut-dracut-053
Mar 21 12:36:38.947934 dracut-cmdline[223]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=fb715041d083099c6a15c8aee7cc93fc3f3ca8764fc0aaaff245a06641d663d2
Mar 21 12:36:38.995589 systemd-resolved[234]: Positive Trust Anchors:
Mar 21 12:36:38.995603 systemd-resolved[234]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 21 12:36:38.995634 systemd-resolved[234]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 21 12:36:38.998086 systemd-resolved[234]: Defaulting to hostname 'linux'.
Mar 21 12:36:38.999128 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 21 12:36:39.004305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:36:39.042258 kernel: SCSI subsystem initialized
Mar 21 12:36:39.051251 kernel: Loading iSCSI transport class v2.0-870.
Mar 21 12:36:39.062256 kernel: iscsi: registered transport (tcp)
Mar 21 12:36:39.082434 kernel: iscsi: registered transport (qla4xxx)
Mar 21 12:36:39.082456 kernel: QLogic iSCSI HBA Driver
Mar 21 12:36:39.132127 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:36:39.133796 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 21 12:36:39.166515 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 21 12:36:39.166567 kernel: device-mapper: uevent: version 1.0.3
Mar 21 12:36:39.167543 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 21 12:36:39.208260 kernel: raid6: avx2x4 gen() 30540 MB/s
Mar 21 12:36:39.225259 kernel: raid6: avx2x2 gen() 31394 MB/s
Mar 21 12:36:39.242335 kernel: raid6: avx2x1 gen() 25850 MB/s
Mar 21 12:36:39.242368 kernel: raid6: using algorithm avx2x2 gen() 31394 MB/s
Mar 21 12:36:39.260317 kernel: raid6: .... xor() 19987 MB/s, rmw enabled
Mar 21 12:36:39.260338 kernel: raid6: using avx2x2 recovery algorithm
Mar 21 12:36:39.280250 kernel: xor: automatically using best checksumming function avx
Mar 21 12:36:39.428257 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 21 12:36:39.440739 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:36:39.443424 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:36:39.473203 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Mar 21 12:36:39.478519 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:36:39.481030 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 21 12:36:39.505382 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Mar 21 12:36:39.536386 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:36:39.539064 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:36:39.618028 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:36:39.621665 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 21 12:36:39.638484 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 21 12:36:39.639760 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 12:36:39.643600 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:36:39.644780 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:36:39.648684 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 21 12:36:39.654436 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Mar 21 12:36:39.666531 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Mar 21 12:36:39.666704 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 21 12:36:39.666717 kernel: GPT:9289727 != 19775487
Mar 21 12:36:39.666728 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 21 12:36:39.666738 kernel: GPT:9289727 != 19775487
Mar 21 12:36:39.666748 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 21 12:36:39.666765 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:36:39.666775 kernel: cryptd: max_cpu_qlen set to 1000
Mar 21 12:36:39.680188 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 21 12:36:39.678020 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 12:36:39.684265 kernel: AES CTR mode by8 optimization enabled
Mar 21 12:36:39.684315 kernel: libata version 3.00 loaded.
Mar 21 12:36:39.694247 kernel: ahci 0000:00:1f.2: version 3.0
Mar 21 12:36:39.718395 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Mar 21 12:36:39.718421 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Mar 21 12:36:39.718582 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Mar 21 12:36:39.718736 kernel: scsi host0: ahci
Mar 21 12:36:39.718890 kernel: scsi host1: ahci
Mar 21 12:36:39.719040 kernel: scsi host2: ahci
Mar 21 12:36:39.719185 kernel: scsi host3: ahci
Mar 21 12:36:39.719356 kernel: BTRFS: device fsid c99b4410-5d95-4377-8189-88a588aa2514 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (466)
Mar 21 12:36:39.719368 kernel: scsi host4: ahci
Mar 21 12:36:39.719512 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (464)
Mar 21 12:36:39.719524 kernel: scsi host5: ahci
Mar 21 12:36:39.719681 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Mar 21 12:36:39.719693 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Mar 21 12:36:39.719703 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Mar 21 12:36:39.719714 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Mar 21 12:36:39.719728 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Mar 21 12:36:39.719738 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Mar 21 12:36:39.701454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:36:39.701627 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:36:39.703932 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:36:39.706415 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:36:39.706682 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:36:39.708248 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:36:39.716363 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:36:39.730816 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Mar 21 12:36:39.763964 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Mar 21 12:36:39.771414 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Mar 21 12:36:39.771677 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Mar 21 12:36:39.782510 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Mar 21 12:36:39.784982 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 21 12:36:39.785567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:36:39.785619 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:36:39.789253 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:36:39.790417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 21 12:36:39.808754 disk-uuid[558]: Primary Header is updated.
Mar 21 12:36:39.808754 disk-uuid[558]: Secondary Entries is updated.
Mar 21 12:36:39.808754 disk-uuid[558]: Secondary Header is updated.
Mar 21 12:36:39.813311 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:36:39.809319 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:36:39.814740 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 21 12:36:39.818251 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:36:39.855825 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:36:40.027265 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Mar 21 12:36:40.027352 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Mar 21 12:36:40.028252 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Mar 21 12:36:40.028270 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Mar 21 12:36:40.029259 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Mar 21 12:36:40.030257 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Mar 21 12:36:40.031473 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Mar 21 12:36:40.031500 kernel: ata3.00: applying bridge limits
Mar 21 12:36:40.032253 kernel: ata3.00: configured for UDMA/100
Mar 21 12:36:40.033259 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Mar 21 12:36:40.082275 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Mar 21 12:36:40.095934 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Mar 21 12:36:40.095951 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Mar 21 12:36:40.818286 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Mar 21 12:36:40.818802 disk-uuid[560]: The operation has completed successfully.
Mar 21 12:36:40.849718 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 21 12:36:40.849880 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 21 12:36:40.890953 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 21 12:36:40.909499 sh[599]: Success
Mar 21 12:36:40.921306 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Mar 21 12:36:40.955224 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 21 12:36:40.959090 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 21 12:36:40.972761 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 21 12:36:40.979214 kernel: BTRFS info (device dm-0): first mount of filesystem c99b4410-5d95-4377-8189-88a588aa2514
Mar 21 12:36:40.979271 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:36:40.979289 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 21 12:36:40.981362 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 21 12:36:40.981378 kernel: BTRFS info (device dm-0): using free space tree
Mar 21 12:36:40.986109 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 21 12:36:40.988292 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 21 12:36:40.990348 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 21 12:36:40.991408 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 21 12:36:41.024725 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:36:41.024765 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:36:41.024780 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:36:41.028298 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:36:41.033261 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:36:41.038904 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 21 12:36:41.041856 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 21 12:36:41.099217 ignition[694]: Ignition 2.20.0
Mar 21 12:36:41.099832 ignition[694]: Stage: fetch-offline
Mar 21 12:36:41.099890 ignition[694]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:41.099901 ignition[694]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:41.100008 ignition[694]: parsed url from cmdline: ""
Mar 21 12:36:41.100012 ignition[694]: no config URL provided
Mar 21 12:36:41.100018 ignition[694]: reading system config file "/usr/lib/ignition/user.ign"
Mar 21 12:36:41.100027 ignition[694]: no config at "/usr/lib/ignition/user.ign"
Mar 21 12:36:41.100059 ignition[694]: op(1): [started] loading QEMU firmware config module
Mar 21 12:36:41.100065 ignition[694]: op(1): executing: "modprobe" "qemu_fw_cfg"
Mar 21 12:36:41.108358 ignition[694]: op(1): [finished] loading QEMU firmware config module
Mar 21 12:36:41.130439 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 12:36:41.135220 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 21 12:36:41.151943 ignition[694]: parsing config with SHA512: 28667683a6df8202541ef440ab0b7ead510684df0eb38c49eb60d6d82b7154ee8a4cc2699c994ee428588ea94f8ce018dfc1a73bab4e993d329f90ff994bf702
Mar 21 12:36:41.157755 unknown[694]: fetched base config from "system"
Mar 21 12:36:41.157765 unknown[694]: fetched user config from "qemu"
Mar 21 12:36:41.158104 ignition[694]: fetch-offline: fetch-offline passed
Mar 21 12:36:41.158166 ignition[694]: Ignition finished successfully
Mar 21 12:36:41.161378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 12:36:41.179554 systemd-networkd[787]: lo: Link UP
Mar 21 12:36:41.179566 systemd-networkd[787]: lo: Gained carrier
Mar 21 12:36:41.182577 systemd-networkd[787]: Enumeration completed
Mar 21 12:36:41.182660 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 21 12:36:41.183247 systemd[1]: Reached target network.target - Network.
Mar 21 12:36:41.183686 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Mar 21 12:36:41.184490 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 21 12:36:41.187660 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:36:41.187665 systemd-networkd[787]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 21 12:36:41.188427 systemd-networkd[787]: eth0: Link UP
Mar 21 12:36:41.188431 systemd-networkd[787]: eth0: Gained carrier
Mar 21 12:36:41.188437 systemd-networkd[787]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 21 12:36:41.215302 systemd-networkd[787]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1
Mar 21 12:36:41.218856 ignition[791]: Ignition 2.20.0
Mar 21 12:36:41.218871 ignition[791]: Stage: kargs
Mar 21 12:36:41.219062 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:41.219748 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:41.223379 ignition[791]: kargs: kargs passed
Mar 21 12:36:41.224114 ignition[791]: Ignition finished successfully
Mar 21 12:36:41.227873 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 21 12:36:41.229144 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 21 12:36:41.260313 ignition[801]: Ignition 2.20.0
Mar 21 12:36:41.260323 ignition[801]: Stage: disks
Mar 21 12:36:41.260475 ignition[801]: no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:41.260485 ignition[801]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:41.261284 ignition[801]: disks: disks passed
Mar 21 12:36:41.261326 ignition[801]: Ignition finished successfully
Mar 21 12:36:41.267110 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 21 12:36:41.269199 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 21 12:36:41.269497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 21 12:36:41.271552 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:36:41.273878 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 21 12:36:41.275860 systemd[1]: Reached target basic.target - Basic System.
Mar 21 12:36:41.279051 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 21 12:36:41.306052 systemd-fsck[812]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Mar 21 12:36:41.312556 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 21 12:36:41.316307 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 21 12:36:41.421266 kernel: EXT4-fs (vda9): mounted filesystem c540419e-275b-4bd7-8ebd-24b19ec75c0b r/w with ordered data mode. Quota mode: none.
Mar 21 12:36:41.421929 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 21 12:36:41.423427 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:36:41.425920 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 12:36:41.427723 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 21 12:36:41.428899 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 21 12:36:41.428949 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 21 12:36:41.428981 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 12:36:41.453938 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 21 12:36:41.456005 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 21 12:36:41.461594 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (820)
Mar 21 12:36:41.461622 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:36:41.461635 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:36:41.461648 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:36:41.463249 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:36:41.464768 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 12:36:41.496307 initrd-setup-root[844]: cut: /sysroot/etc/passwd: No such file or directory
Mar 21 12:36:41.501813 initrd-setup-root[851]: cut: /sysroot/etc/group: No such file or directory
Mar 21 12:36:41.506910 initrd-setup-root[858]: cut: /sysroot/etc/shadow: No such file or directory
Mar 21 12:36:41.511890 initrd-setup-root[865]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 21 12:36:41.597873 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 21 12:36:41.599488 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 21 12:36:41.601243 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 21 12:36:41.617254 kernel: BTRFS info (device vda6): last unmount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:36:41.633444 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 21 12:36:41.646650 ignition[933]: INFO : Ignition 2.20.0
Mar 21 12:36:41.646650 ignition[933]: INFO : Stage: mount
Mar 21 12:36:41.648434 ignition[933]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:41.648434 ignition[933]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:41.648434 ignition[933]: INFO : mount: mount passed
Mar 21 12:36:41.648434 ignition[933]: INFO : Ignition finished successfully
Mar 21 12:36:41.649791 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 21 12:36:41.652642 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 21 12:36:41.978276 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 21 12:36:41.979889 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 21 12:36:42.002258 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (946)
Mar 21 12:36:42.002301 kernel: BTRFS info (device vda6): first mount of filesystem 667b391b-b0e4-4f87-a670-43615a660c46
Mar 21 12:36:42.003687 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Mar 21 12:36:42.003701 kernel: BTRFS info (device vda6): using free space tree
Mar 21 12:36:42.007259 kernel: BTRFS info (device vda6): auto enabling async discard
Mar 21 12:36:42.007981 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 21 12:36:42.037196 ignition[963]: INFO : Ignition 2.20.0
Mar 21 12:36:42.037196 ignition[963]: INFO : Stage: files
Mar 21 12:36:42.039060 ignition[963]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:42.039060 ignition[963]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:42.039060 ignition[963]: DEBUG : files: compiled without relabeling support, skipping
Mar 21 12:36:42.039060 ignition[963]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 21 12:36:42.039060 ignition[963]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 21 12:36:42.045918 ignition[963]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 21 12:36:42.045918 ignition[963]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 21 12:36:42.045918 ignition[963]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 21 12:36:42.045918 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 21 12:36:42.045918 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Mar 21 12:36:42.041689 unknown[963]: wrote ssh authorized keys file for user: core
Mar 21 12:36:42.082398 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 21 12:36:42.165681 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 21 12:36:42.167783 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Mar 21 12:36:42.645290 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 21 12:36:42.987120 ignition[963]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Mar 21 12:36:42.987120 ignition[963]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Mar 21 12:36:42.990893 ignition[963]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:36:43.010121 ignition[963]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:36:43.014442 ignition[963]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 21 12:36:43.016005 ignition[963]: INFO : files: files passed
Mar 21 12:36:43.016005 ignition[963]: INFO : Ignition finished successfully
Mar 21 12:36:43.017689 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 21 12:36:43.021043 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 21 12:36:43.023081 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 21 12:36:43.035986 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 21 12:36:43.036109 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 21 12:36:43.039411 initrd-setup-root-after-ignition[992]: grep: /sysroot/oem/oem-release: No such file or directory
Mar 21 12:36:43.040826 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:36:43.040826 initrd-setup-root-after-ignition[995]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:36:43.045045 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 21 12:36:43.043325 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 12:36:43.045316 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 21 12:36:43.048358 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 21 12:36:43.098650 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 21 12:36:43.098775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 21 12:36:43.100440 systemd-networkd[787]: eth0: Gained IPv6LL
Mar 21 12:36:43.101154 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 21 12:36:43.103204 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 21 12:36:43.104406 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 21 12:36:43.105150 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 21 12:36:43.127805 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:36:43.129492 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 21 12:36:43.155082 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 21 12:36:43.156432 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:36:43.158628 systemd[1]: Stopped target timers.target - Timer Units.
Mar 21 12:36:43.160602 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 21 12:36:43.160735 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 21 12:36:43.162809 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 21 12:36:43.164527 systemd[1]: Stopped target basic.target - Basic System.
Mar 21 12:36:43.166568 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 21 12:36:43.168576 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 21 12:36:43.170561 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 21 12:36:43.172696 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 21 12:36:43.174783 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 21 12:36:43.177039 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 21 12:36:43.179001 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 21 12:36:43.181184 systemd[1]: Stopped target swap.target - Swaps.
Mar 21 12:36:43.182914 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 21 12:36:43.183047 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 21 12:36:43.185141 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 21 12:36:43.186790 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:36:43.188985 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 21 12:36:43.189111 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:36:43.191079 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 21 12:36:43.191212 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 21 12:36:43.193380 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 21 12:36:43.193505 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 21 12:36:43.195454 systemd[1]: Stopped target paths.target - Path Units.
Mar 21 12:36:43.197185 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 21 12:36:43.201280 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:36:43.202731 systemd[1]: Stopped target slices.target - Slice Units.
Mar 21 12:36:43.204686 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 21 12:36:43.206432 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 21 12:36:43.206529 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 21 12:36:43.208436 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 21 12:36:43.208525 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 21 12:36:43.210876 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 21 12:36:43.210992 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 21 12:36:43.212904 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 21 12:36:43.213010 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 21 12:36:43.215559 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 21 12:36:43.217285 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 21 12:36:43.217401 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:36:43.230512 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 21 12:36:43.231421 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 21 12:36:43.231547 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 21 12:36:43.233520 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 21 12:36:43.233632 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 21 12:36:43.240008 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 21 12:36:43.240123 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 21 12:36:43.244265 ignition[1020]: INFO : Ignition 2.20.0
Mar 21 12:36:43.245168 ignition[1020]: INFO : Stage: umount
Mar 21 12:36:43.245168 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 21 12:36:43.245168 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Mar 21 12:36:43.249112 ignition[1020]: INFO : umount: umount passed
Mar 21 12:36:43.249112 ignition[1020]: INFO : Ignition finished successfully
Mar 21 12:36:43.247202 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 21 12:36:43.247328 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 21 12:36:43.249614 systemd[1]: Stopped target network.target - Network.
Mar 21 12:36:43.250794 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 21 12:36:43.250854 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 21 12:36:43.252695 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 21 12:36:43.252744 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 21 12:36:43.254625 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 21 12:36:43.254673 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 21 12:36:43.256634 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 21 12:36:43.256682 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 21 12:36:43.258612 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 21 12:36:43.260838 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 21 12:36:43.264174 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 21 12:36:43.269197 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 21 12:36:43.269333 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 21 12:36:43.273720 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 21 12:36:43.273916 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 21 12:36:43.274034 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 21 12:36:43.278066 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 21 12:36:43.278880 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 21 12:36:43.278947 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:36:43.281886 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 21 12:36:43.283397 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 21 12:36:43.283465 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 21 12:36:43.286053 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 21 12:36:43.286115 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 21 12:36:43.288052 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 21 12:36:43.288102 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:36:43.290119 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 21 12:36:43.290169 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 21 12:36:43.292465 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 21 12:36:43.295620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 21 12:36:43.295699 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:36:43.311160 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 21 12:36:43.311409 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 21 12:36:43.313815 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 21 12:36:43.313918 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 21 12:36:43.316174 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 21 12:36:43.316309 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:36:43.317613 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 21 12:36:43.317654 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:36:43.319545 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 21 12:36:43.319598 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 21 12:36:43.321800 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 21 12:36:43.321850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 21 12:36:43.323788 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 21 12:36:43.323839 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 21 12:36:43.326605 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 21 12:36:43.327991 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 21 12:36:43.328045 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 21 12:36:43.330269 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 21 12:36:43.330318 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 21 12:36:43.333285 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 21 12:36:43.333353 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 21 12:36:43.346153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 21 12:36:43.346287 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 21 12:36:43.417664 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 21 12:36:43.417815 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 21 12:36:43.418735 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 21 12:36:43.420582 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 21 12:36:43.420642 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 21 12:36:43.425382 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 21 12:36:43.448851 systemd[1]: Switching root.
Mar 21 12:36:43.479350 systemd-journald[191]: Journal stopped
Mar 21 12:36:44.594276 systemd-journald[191]: Received SIGTERM from PID 1 (systemd).
Mar 21 12:36:44.594343 kernel: SELinux: policy capability network_peer_controls=1
Mar 21 12:36:44.594357 kernel: SELinux: policy capability open_perms=1
Mar 21 12:36:44.594372 kernel: SELinux: policy capability extended_socket_class=1
Mar 21 12:36:44.594384 kernel: SELinux: policy capability always_check_network=0
Mar 21 12:36:44.594395 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 21 12:36:44.594407 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 21 12:36:44.594418 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 21 12:36:44.594430 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 21 12:36:44.594441 kernel: audit: type=1403 audit(1742560603.823:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 21 12:36:44.594454 systemd[1]: Successfully loaded SELinux policy in 38.724ms.
Mar 21 12:36:44.594482 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.012ms.
Mar 21 12:36:44.594505 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 21 12:36:44.594518 systemd[1]: Detected virtualization kvm.
Mar 21 12:36:44.594530 systemd[1]: Detected architecture x86-64.
Mar 21 12:36:44.594542 systemd[1]: Detected first boot.
Mar 21 12:36:44.594559 systemd[1]: Initializing machine ID from VM UUID.
Mar 21 12:36:44.594571 zram_generator::config[1065]: No configuration found.
Mar 21 12:36:44.594590 kernel: Guest personality initialized and is inactive
Mar 21 12:36:44.594602 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 21 12:36:44.594616 kernel: Initialized host personality
Mar 21 12:36:44.594629 kernel: NET: Registered PF_VSOCK protocol family
Mar 21 12:36:44.594641 systemd[1]: Populated /etc with preset unit settings.
Mar 21 12:36:44.594654 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 21 12:36:44.594672 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 21 12:36:44.594684 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 21 12:36:44.594696 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 21 12:36:44.594709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 21 12:36:44.594721 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 21 12:36:44.594741 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 21 12:36:44.594754 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 21 12:36:44.594766 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 21 12:36:44.594778 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 21 12:36:44.594790 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 21 12:36:44.594802 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 21 12:36:44.594815 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 21 12:36:44.594827 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 21 12:36:44.594839 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 21 12:36:44.594854 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 21 12:36:44.594866 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 21 12:36:44.594879 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 21 12:36:44.594891 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 21 12:36:44.594905 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 21 12:36:44.594917 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 21 12:36:44.594930 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 21 12:36:44.594944 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 21 12:36:44.594956 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 21 12:36:44.594968 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 21 12:36:44.594980 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 21 12:36:44.594992 systemd[1]: Reached target slices.target - Slice Units.
Mar 21 12:36:44.595005 systemd[1]: Reached target swap.target - Swaps.
Mar 21 12:36:44.595017 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 21 12:36:44.595029 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 21 12:36:44.595041 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 21 12:36:44.595056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 21 12:36:44.595068 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 21 12:36:44.595080 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 21 12:36:44.595092 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 21 12:36:44.595105 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 21 12:36:44.595117 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 21 12:36:44.595130 systemd[1]: Mounting media.mount - External Media Directory...
Mar 21 12:36:44.595142 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:36:44.595154 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 21 12:36:44.595170 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 21 12:36:44.595182 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 21 12:36:44.595194 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 21 12:36:44.595206 systemd[1]: Reached target machines.target - Containers.
Mar 21 12:36:44.595218 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 21 12:36:44.595244 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 21 12:36:44.595256 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 21 12:36:44.595268 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 21 12:36:44.595283 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 21 12:36:44.595295 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 21 12:36:44.595308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 21 12:36:44.595320 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 21 12:36:44.595332 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 21 12:36:44.595344 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 21 12:36:44.595357 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 21 12:36:44.595369 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 21 12:36:44.595381 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 21 12:36:44.595396 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 21 12:36:44.595408 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 21 12:36:44.595421 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 21 12:36:44.595433 kernel: fuse: init (API version 7.39)
Mar 21 12:36:44.595446 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 21 12:36:44.595458 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 21 12:36:44.595470 kernel: loop: module loaded
Mar 21 12:36:44.595482 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 21 12:36:44.595494 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 21 12:36:44.595515 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 21 12:36:44.595528 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 21 12:36:44.595541 systemd[1]: Stopped verity-setup.service.
Mar 21 12:36:44.595553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 21 12:36:44.595584 systemd-journald[1143]: Collecting audit messages is disabled.
Mar 21 12:36:44.595605 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 21 12:36:44.595618 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 21 12:36:44.595630 systemd-journald[1143]: Journal started
Mar 21 12:36:44.595653 systemd-journald[1143]: Runtime Journal (/run/log/journal/3da644135bf544bb94f3f38096d6e374) is 6M, max 47.9M, 41.9M free.
Mar 21 12:36:44.375471 systemd[1]: Queued start job for default target multi-user.target.
Mar 21 12:36:44.390119 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Mar 21 12:36:44.390613 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 21 12:36:44.599729 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 21 12:36:44.600016 systemd[1]: Mounted media.mount - External Media Directory.
Mar 21 12:36:44.601126 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 21 12:36:44.602414 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 21 12:36:44.603648 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 21 12:36:44.605150 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 21 12:36:44.607579 kernel: ACPI: bus type drm_connector registered
Mar 21 12:36:44.608202 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 21 12:36:44.610030 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 21 12:36:44.610302 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 21 12:36:44.611826 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 21 12:36:44.612053 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 21 12:36:44.613749 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 21 12:36:44.613977 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 21 12:36:44.615376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 21 12:36:44.615592 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 21 12:36:44.617241 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 21 12:36:44.617456 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 21 12:36:44.618856 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 21 12:36:44.619059 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 21 12:36:44.620813 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 21 12:36:44.622261 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 21 12:36:44.623826 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 21 12:36:44.625455 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 21 12:36:44.641504 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 21 12:36:44.644244 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 21 12:36:44.646509 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 21 12:36:44.647770 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 21 12:36:44.647850 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 21 12:36:44.649924 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 21 12:36:44.658559 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 21 12:36:44.661338 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 21 12:36:44.662486 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 21 12:36:44.663706 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 21 12:36:44.665963 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 21 12:36:44.667218 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 21 12:36:44.671958 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 21 12:36:44.675183 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 21 12:36:44.678408 systemd-journald[1143]: Time spent on flushing to /var/log/journal/3da644135bf544bb94f3f38096d6e374 is 14.081ms for 1025 entries.
Mar 21 12:36:44.678408 systemd-journald[1143]: System Journal (/var/log/journal/3da644135bf544bb94f3f38096d6e374) is 8M, max 195.6M, 187.6M free.
Mar 21 12:36:44.711516 systemd-journald[1143]: Received client request to flush runtime journal.
Mar 21 12:36:44.676557 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 21 12:36:44.681444 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Mar 21 12:36:44.684293 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 21 12:36:44.693327 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 21 12:36:44.696431 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 21 12:36:44.697753 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 21 12:36:44.699474 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Mar 21 12:36:44.701101 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 21 12:36:44.706649 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 21 12:36:44.710732 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Mar 21 12:36:44.714745 kernel: loop0: detected capacity change from 0 to 218376 Mar 21 12:36:44.715141 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 21 12:36:44.725690 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 21 12:36:44.734444 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 21 12:36:44.738387 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Mar 21 12:36:44.740544 udevadm[1196]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Mar 21 12:36:44.749405 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 21 12:36:44.752597 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Mar 21 12:36:44.764518 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Mar 21 12:36:44.766380 kernel: loop1: detected capacity change from 0 to 151640 Mar 21 12:36:44.782845 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 21 12:36:44.782864 systemd-tmpfiles[1204]: ACLs are not supported, ignoring. Mar 21 12:36:44.789927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 21 12:36:44.803261 kernel: loop2: detected capacity change from 0 to 109808 Mar 21 12:36:44.839266 kernel: loop3: detected capacity change from 0 to 218376 Mar 21 12:36:44.849618 kernel: loop4: detected capacity change from 0 to 151640 Mar 21 12:36:44.863417 kernel: loop5: detected capacity change from 0 to 109808 Mar 21 12:36:44.871018 (sd-merge)[1210]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Mar 21 12:36:44.871648 (sd-merge)[1210]: Merged extensions into '/usr'. Mar 21 12:36:44.876592 systemd[1]: Reload requested from client PID 1185 ('systemd-sysext') (unit systemd-sysext.service)... Mar 21 12:36:44.876613 systemd[1]: Reloading... Mar 21 12:36:44.945253 zram_generator::config[1238]: No configuration found. Mar 21 12:36:44.988357 ldconfig[1180]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Mar 21 12:36:45.065681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:36:45.129270 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Mar 21 12:36:45.129826 systemd[1]: Reloading finished in 252 ms. Mar 21 12:36:45.146862 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Mar 21 12:36:45.148426 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Mar 21 12:36:45.169568 systemd[1]: Starting ensure-sysext.service... Mar 21 12:36:45.171495 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 21 12:36:45.192560 systemd[1]: Reload requested from client PID 1275 ('systemctl') (unit ensure-sysext.service)... Mar 21 12:36:45.192672 systemd[1]: Reloading... Mar 21 12:36:45.193372 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Mar 21 12:36:45.193657 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Mar 21 12:36:45.194593 systemd-tmpfiles[1277]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Mar 21 12:36:45.194859 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Mar 21 12:36:45.194936 systemd-tmpfiles[1277]: ACLs are not supported, ignoring. Mar 21 12:36:45.198955 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Mar 21 12:36:45.198969 systemd-tmpfiles[1277]: Skipping /boot Mar 21 12:36:45.211666 systemd-tmpfiles[1277]: Detected autofs mount point /boot during canonicalization of boot. Mar 21 12:36:45.211682 systemd-tmpfiles[1277]: Skipping /boot Mar 21 12:36:45.246268 zram_generator::config[1304]: No configuration found. Mar 21 12:36:45.360838 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 21 12:36:45.425323 systemd[1]: Reloading finished in 232 ms. Mar 21 12:36:45.438895 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 21 12:36:45.454934 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 21 12:36:45.464526 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Mar 21 12:36:45.466943 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Mar 21 12:36:45.479852 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Mar 21 12:36:45.483041 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Mar 21 12:36:45.489143 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 21 12:36:45.491571 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Mar 21 12:36:45.498465 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.498640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 21 12:36:45.500369 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 21 12:36:45.508028 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 21 12:36:45.510560 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 21 12:36:45.511747 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 21 12:36:45.511845 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 21 12:36:45.513580 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 21 12:36:45.515410 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.524700 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. 
Mar 21 12:36:45.527495 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 21 12:36:45.527873 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 21 12:36:45.529763 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 21 12:36:45.529977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 21 12:36:45.531887 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 21 12:36:45.532276 systemd-udevd[1353]: Using default interface naming scheme 'v255'. Mar 21 12:36:45.532449 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 21 12:36:45.537877 augenrules[1374]: No rules Mar 21 12:36:45.539038 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:36:45.539447 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:36:45.547454 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.547682 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 21 12:36:45.549927 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 21 12:36:45.552349 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 21 12:36:45.556407 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 21 12:36:45.557599 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 21 12:36:45.557713 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 21 12:36:45.562437 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Mar 21 12:36:45.564282 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.565723 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 21 12:36:45.567853 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Mar 21 12:36:45.569680 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Mar 21 12:36:45.579342 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 21 12:36:45.581084 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 21 12:36:45.581794 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 21 12:36:45.583439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 21 12:36:45.584120 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 21 12:36:45.585909 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 21 12:36:45.586254 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 21 12:36:45.608632 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.612743 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:36:45.613888 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 21 12:36:45.619296 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 21 12:36:45.621537 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 21 12:36:45.626045 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Mar 21 12:36:45.634290 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1409) Mar 21 12:36:45.635588 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 21 12:36:45.636813 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Mar 21 12:36:45.636924 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Mar 21 12:36:45.639620 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 21 12:36:45.640695 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Mar 21 12:36:45.640797 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 21 12:36:45.642631 systemd[1]: Finished systemd-update-done.service - Update is Completed. Mar 21 12:36:45.649982 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 21 12:36:45.650253 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 21 12:36:45.652100 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 21 12:36:45.652434 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 21 12:36:45.654010 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 21 12:36:45.654486 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 21 12:36:45.663208 systemd[1]: Finished ensure-sysext.service. Mar 21 12:36:45.668739 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Mar 21 12:36:45.672987 augenrules[1417]: /sbin/augenrules: No change Mar 21 12:36:45.683668 augenrules[1449]: No rules Mar 21 12:36:45.683963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 21 12:36:45.684302 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 21 12:36:45.686045 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:36:45.686899 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:36:45.697869 systemd-resolved[1348]: Positive Trust Anchors: Mar 21 12:36:45.697969 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 21 12:36:45.699452 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 21 12:36:45.699496 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 21 12:36:45.704242 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 21 12:36:45.705189 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 21 12:36:45.706398 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 21 12:36:45.706453 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 21 12:36:45.707360 systemd-resolved[1348]: Defaulting to hostname 'linux'. Mar 21 12:36:45.710430 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Mar 21 12:36:45.712123 kernel: ACPI: button: Power Button [PWRF] Mar 21 12:36:45.712328 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 21 12:36:45.714455 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 21 12:36:45.732927 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Mar 21 12:36:45.741714 systemd-networkd[1424]: lo: Link UP Mar 21 12:36:45.741725 systemd-networkd[1424]: lo: Gained carrier Mar 21 12:36:45.743589 systemd-networkd[1424]: Enumeration completed Mar 21 12:36:45.743696 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 21 12:36:45.743964 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:36:45.743977 systemd-networkd[1424]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 21 12:36:45.744937 systemd-networkd[1424]: eth0: Link UP Mar 21 12:36:45.744946 systemd-networkd[1424]: eth0: Gained carrier Mar 21 12:36:45.744959 systemd-networkd[1424]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 21 12:36:45.745073 systemd[1]: Reached target network.target - Network. Mar 21 12:36:45.748919 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Mar 21 12:36:45.751363 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 21 12:36:45.753998 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Mar 21 12:36:45.758706 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Mar 21 12:36:45.758885 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Mar 21 12:36:45.759825 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Mar 21 12:36:45.758628 systemd-networkd[1424]: eth0: DHCPv4 address 10.0.0.113/16, gateway 10.0.0.1 acquired from 10.0.0.1 Mar 21 12:36:45.772248 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 21 12:36:45.777701 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Mar 21 12:36:45.784190 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Mar 21 12:36:45.785782 systemd[1]: Reached target time-set.target - System Time Set. Mar 21 12:36:45.786305 systemd-timesyncd[1460]: Contacted time server 10.0.0.1:123 (10.0.0.1). Mar 21 12:36:45.787376 systemd-timesyncd[1460]: Initial clock synchronization to Fri 2025-03-21 12:36:45.936737 UTC. Mar 21 12:36:45.817599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 21 12:36:45.822062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 21 12:36:45.822361 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:36:45.824274 kernel: mousedev: PS/2 mouse device common for all mice Mar 21 12:36:45.826535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 21 12:36:45.882583 kernel: kvm_amd: TSC scaling supported Mar 21 12:36:45.882618 kernel: kvm_amd: Nested Virtualization enabled Mar 21 12:36:45.882632 kernel: kvm_amd: Nested Paging enabled Mar 21 12:36:45.882656 kernel: kvm_amd: LBR virtualization supported Mar 21 12:36:45.883683 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Mar 21 12:36:45.883705 kernel: kvm_amd: Virtual GIF supported Mar 21 12:36:45.905275 kernel: EDAC MC: Ver: 3.0.0 Mar 21 12:36:45.929508 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 21 12:36:45.939560 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 21 12:36:45.942573 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 21 12:36:45.964499 lvm[1482]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 12:36:45.994026 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 21 12:36:45.995515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 21 12:36:45.996635 systemd[1]: Reached target sysinit.target - System Initialization. Mar 21 12:36:45.997796 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Mar 21 12:36:45.999041 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Mar 21 12:36:46.000477 systemd[1]: Started logrotate.timer - Daily rotation of log files. Mar 21 12:36:46.001660 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Mar 21 12:36:46.002920 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Mar 21 12:36:46.004167 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). 
Mar 21 12:36:46.004190 systemd[1]: Reached target paths.target - Path Units. Mar 21 12:36:46.005097 systemd[1]: Reached target timers.target - Timer Units. Mar 21 12:36:46.006712 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Mar 21 12:36:46.009361 systemd[1]: Starting docker.socket - Docker Socket for the API... Mar 21 12:36:46.012850 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Mar 21 12:36:46.014317 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Mar 21 12:36:46.015582 systemd[1]: Reached target ssh-access.target - SSH Access Available. Mar 21 12:36:46.019346 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Mar 21 12:36:46.020879 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Mar 21 12:36:46.023335 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 21 12:36:46.024987 systemd[1]: Listening on docker.socket - Docker Socket for the API. Mar 21 12:36:46.026176 systemd[1]: Reached target sockets.target - Socket Units. Mar 21 12:36:46.027156 systemd[1]: Reached target basic.target - Basic System. Mar 21 12:36:46.028143 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Mar 21 12:36:46.028172 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Mar 21 12:36:46.029116 systemd[1]: Starting containerd.service - containerd container runtime... Mar 21 12:36:46.031172 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Mar 21 12:36:46.034964 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Mar 21 12:36:46.035352 lvm[1486]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 21 12:36:46.038810 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Mar 21 12:36:46.039999 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Mar 21 12:36:46.042356 jq[1489]: false Mar 21 12:36:46.043450 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Mar 21 12:36:46.046658 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Mar 21 12:36:46.052084 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Mar 21 12:36:46.055994 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Mar 21 12:36:46.058577 extend-filesystems[1490]: Found loop3 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found loop4 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found loop5 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found sr0 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda1 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda2 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda3 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found usr Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda4 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda6 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda7 Mar 21 12:36:46.059805 extend-filesystems[1490]: Found vda9 Mar 21 12:36:46.059805 extend-filesystems[1490]: Checking size of /dev/vda9 Mar 21 12:36:46.065202 systemd[1]: Starting systemd-logind.service - User Login Management... Mar 21 12:36:46.060964 dbus-daemon[1488]: [system] SELinux support is enabled Mar 21 12:36:46.071139 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Mar 21 12:36:46.073428 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. 
See cgroup-compat debug messages for details. Mar 21 12:36:46.076583 systemd[1]: Starting update-engine.service - Update Engine... Mar 21 12:36:46.079003 extend-filesystems[1490]: Resized partition /dev/vda9 Mar 21 12:36:46.083831 extend-filesystems[1509]: resize2fs 1.47.2 (1-Jan-2025) Mar 21 12:36:46.088363 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1410) Mar 21 12:36:46.088309 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Mar 21 12:36:46.093908 systemd[1]: Started dbus.service - D-Bus System Message Bus. Mar 21 12:36:46.100460 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Mar 21 12:36:46.100001 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 21 12:36:46.102713 jq[1512]: true Mar 21 12:36:46.102630 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Mar 21 12:36:46.103446 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Mar 21 12:36:46.103899 systemd[1]: motdgen.service: Deactivated successfully. Mar 21 12:36:46.104163 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Mar 21 12:36:46.106908 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Mar 21 12:36:46.107284 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Mar 21 12:36:46.112044 update_engine[1508]: I20250321 12:36:46.111979 1508 main.cc:92] Flatcar Update Engine starting Mar 21 12:36:46.113840 update_engine[1508]: I20250321 12:36:46.113142 1508 update_check_scheduler.cc:74] Next update check in 4m5s Mar 21 12:36:46.122261 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Mar 21 12:36:46.127546 jq[1515]: true Mar 21 12:36:46.148782 extend-filesystems[1509]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Mar 21 12:36:46.148782 extend-filesystems[1509]: old_desc_blocks = 1, new_desc_blocks = 1 Mar 21 12:36:46.148782 extend-filesystems[1509]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Mar 21 12:36:46.148601 (ntainerd)[1520]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Mar 21 12:36:46.152729 extend-filesystems[1490]: Resized filesystem in /dev/vda9 Mar 21 12:36:46.154405 systemd-logind[1503]: Watching system buttons on /dev/input/event1 (Power Button) Mar 21 12:36:46.154441 systemd-logind[1503]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Mar 21 12:36:46.155867 systemd[1]: extend-filesystems.service: Deactivated successfully. Mar 21 12:36:46.156972 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Mar 21 12:36:46.157191 systemd-logind[1503]: New seat seat0. Mar 21 12:36:46.160650 systemd[1]: Started systemd-logind.service - User Login Management. Mar 21 12:36:46.163499 tar[1514]: linux-amd64/LICENSE Mar 21 12:36:46.167695 tar[1514]: linux-amd64/helm Mar 21 12:36:46.181558 systemd[1]: Started update-engine.service - Update Engine. Mar 21 12:36:46.184109 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Mar 21 12:36:46.184543 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Mar 21 12:36:46.186624 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Mar 21 12:36:46.186811 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Mar 21 12:36:46.188359 bash[1543]: Updated "/home/core/.ssh/authorized_keys" Mar 21 12:36:46.193523 systemd[1]: Started locksmithd.service - Cluster reboot manager. Mar 21 12:36:46.205575 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Mar 21 12:36:46.209068 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Mar 21 12:36:46.237917 locksmithd[1544]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Mar 21 12:36:46.327612 containerd[1520]: time="2025-03-21T12:36:46Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Mar 21 12:36:46.328363 containerd[1520]: time="2025-03-21T12:36:46.328317029Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 Mar 21 12:36:46.338093 containerd[1520]: time="2025-03-21T12:36:46.338054253Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="5.84µs" Mar 21 12:36:46.338093 containerd[1520]: time="2025-03-21T12:36:46.338080384Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Mar 21 12:36:46.338093 containerd[1520]: time="2025-03-21T12:36:46.338099308Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Mar 21 12:36:46.338294 
containerd[1520]: time="2025-03-21T12:36:46.338266196Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Mar 21 12:36:46.338294 containerd[1520]: time="2025-03-21T12:36:46.338290633Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Mar 21 12:36:46.338362 containerd[1520]: time="2025-03-21T12:36:46.338316161Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:36:46.338450 containerd[1520]: time="2025-03-21T12:36:46.338380997Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Mar 21 12:36:46.338450 containerd[1520]: time="2025-03-21T12:36:46.338396757Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 12:36:46.340987 containerd[1520]: time="2025-03-21T12:36:46.340942947Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Mar 21 12:36:46.340987 containerd[1520]: time="2025-03-21T12:36:46.340976223Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341053 containerd[1520]: time="2025-03-21T12:36:46.341001466Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341053 containerd[1520]: time="2025-03-21T12:36:46.341013142Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341152 containerd[1520]: time="2025-03-21T12:36:46.341123371Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341465 containerd[1520]: time="2025-03-21T12:36:46.341431641Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341489 containerd[1520]: time="2025-03-21T12:36:46.341476899Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Mar 21 12:36:46.341512 containerd[1520]: time="2025-03-21T12:36:46.341492557Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Mar 21 12:36:46.341547 containerd[1520]: time="2025-03-21T12:36:46.341527507Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Mar 21 12:36:46.341819 containerd[1520]: time="2025-03-21T12:36:46.341791446Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Mar 21 12:36:46.341916 containerd[1520]: time="2025-03-21T12:36:46.341890783Z" level=info msg="metadata content store policy set" policy=shared Mar 21 12:36:46.347329 containerd[1520]: time="2025-03-21T12:36:46.347292719Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Mar 21 12:36:46.347367 containerd[1520]: time="2025-03-21T12:36:46.347343275Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Mar 21 12:36:46.347367 containerd[1520]: time="2025-03-21T12:36:46.347362578Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Mar 21 12:36:46.347406 containerd[1520]: time="2025-03-21T12:36:46.347375653Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Mar 21 12:36:46.347406 containerd[1520]: 
time="2025-03-21T12:36:46.347390168Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Mar 21 12:36:46.347406 containerd[1520]: time="2025-03-21T12:36:46.347401610Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Mar 21 12:36:46.347483 containerd[1520]: time="2025-03-21T12:36:46.347415420Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Mar 21 12:36:46.347483 containerd[1520]: time="2025-03-21T12:36:46.347428843Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Mar 21 12:36:46.347483 containerd[1520]: time="2025-03-21T12:36:46.347443592Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Mar 21 12:36:46.347483 containerd[1520]: time="2025-03-21T12:36:46.347468835Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 21 12:36:46.347483 containerd[1520]: time="2025-03-21T12:36:46.347478379Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 21 12:36:46.347571 containerd[1520]: time="2025-03-21T12:36:46.347490035Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 21 12:36:46.347637 containerd[1520]: time="2025-03-21T12:36:46.347607501Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 21 12:36:46.347637 containerd[1520]: time="2025-03-21T12:36:46.347632580Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 21 12:36:46.347678 containerd[1520]: time="2025-03-21T12:36:46.347645686Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 21 12:36:46.347678 containerd[1520]: 
time="2025-03-21T12:36:46.347656945Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 21 12:36:46.347678 containerd[1520]: time="2025-03-21T12:36:46.347667601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 21 12:36:46.347678 containerd[1520]: time="2025-03-21T12:36:46.347678788Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 21 12:36:46.347764 containerd[1520]: time="2025-03-21T12:36:46.347695140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 21 12:36:46.347764 containerd[1520]: time="2025-03-21T12:36:46.347706694Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 21 12:36:46.347764 containerd[1520]: time="2025-03-21T12:36:46.347717596Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 21 12:36:46.347764 containerd[1520]: time="2025-03-21T12:36:46.347730416Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 21 12:36:46.347764 containerd[1520]: time="2025-03-21T12:36:46.347741165Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 21 12:36:46.347862 containerd[1520]: time="2025-03-21T12:36:46.347796947Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 21 12:36:46.347862 containerd[1520]: time="2025-03-21T12:36:46.347809880Z" level=info msg="Start snapshots syncer" Mar 21 12:36:46.347862 containerd[1520]: time="2025-03-21T12:36:46.347831907Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 21 12:36:46.348104 containerd[1520]: time="2025-03-21T12:36:46.348061897Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 21 12:36:46.348214 containerd[1520]: time="2025-03-21T12:36:46.348120528Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 21 12:36:46.348313 containerd[1520]: time="2025-03-21T12:36:46.348283660Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 21 12:36:46.348446 containerd[1520]: time="2025-03-21T12:36:46.348417171Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 21 12:36:46.348446 containerd[1520]: time="2025-03-21T12:36:46.348443342Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 21 12:36:46.348488 containerd[1520]: time="2025-03-21T12:36:46.348459653Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 21 12:36:46.348488 containerd[1520]: time="2025-03-21T12:36:46.348471402Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 21 12:36:46.348488 containerd[1520]: time="2025-03-21T12:36:46.348483140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 21 12:36:46.348556 containerd[1520]: time="2025-03-21T12:36:46.348499156Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 21 12:36:46.348556 containerd[1520]: time="2025-03-21T12:36:46.348510067Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 21 12:36:46.348556 containerd[1520]: time="2025-03-21T12:36:46.348531747Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 21 12:36:46.348556 containerd[1520]: time="2025-03-21T12:36:46.348549191Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 21 12:36:46.348637 containerd[1520]: time="2025-03-21T12:36:46.348559379Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 21 12:36:46.348637 containerd[1520]: time="2025-03-21T12:36:46.348602524Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:36:46.348637 containerd[1520]: time="2025-03-21T12:36:46.348614682Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 21 12:36:46.348637 containerd[1520]: time="2025-03-21T12:36:46.348623470Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:36:46.348637 containerd[1520]: time="2025-03-21T12:36:46.348634054Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 21 12:36:46.348728 containerd[1520]: time="2025-03-21T12:36:46.348643312Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 21 12:36:46.348728 containerd[1520]: time="2025-03-21T12:36:46.348712467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 21 12:36:46.348728 containerd[1520]: time="2025-03-21T12:36:46.348724022Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 21 12:36:46.348787 containerd[1520]: time="2025-03-21T12:36:46.348742007Z" level=info msg="runtime interface created" Mar 21 12:36:46.348787 containerd[1520]: time="2025-03-21T12:36:46.348748264Z" level=info msg="created NRI interface" Mar 21 12:36:46.348787 containerd[1520]: time="2025-03-21T12:36:46.348757960Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 21 12:36:46.348787 containerd[1520]: time="2025-03-21T12:36:46.348768505Z" level=info msg="Connect containerd service" Mar 21 12:36:46.348858 containerd[1520]: time="2025-03-21T12:36:46.348791113Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 21 12:36:46.350558 
containerd[1520]: time="2025-03-21T12:36:46.350525765Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 21 12:36:46.394509 sshd_keygen[1511]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 21 12:36:46.418732 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 21 12:36:46.422119 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 21 12:36:46.434284 containerd[1520]: time="2025-03-21T12:36:46.434201188Z" level=info msg="Start subscribing containerd event" Mar 21 12:36:46.434349 containerd[1520]: time="2025-03-21T12:36:46.434282316Z" level=info msg="Start recovering state" Mar 21 12:36:46.434396 containerd[1520]: time="2025-03-21T12:36:46.434383704Z" level=info msg="Start event monitor" Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434399720Z" level=info msg="Start cni network conf syncer for default" Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434408783Z" level=info msg="Start streaming server" Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434418051Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434426299Z" level=info msg="runtime interface starting up..." Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434432230Z" level=info msg="starting plugins..." Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434447959Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434229921Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434624718Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Mar 21 12:36:46.438738 containerd[1520]: time="2025-03-21T12:36:46.434677387Z" level=info msg="containerd successfully booted in 0.107585s" Mar 21 12:36:46.437375 systemd[1]: Started containerd.service - containerd container runtime. Mar 21 12:36:46.441477 systemd[1]: issuegen.service: Deactivated successfully. Mar 21 12:36:46.441858 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 21 12:36:46.444878 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 21 12:36:46.466541 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 21 12:36:46.469606 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 21 12:36:46.472185 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 21 12:36:46.473765 systemd[1]: Reached target getty.target - Login Prompts. Mar 21 12:36:46.568597 tar[1514]: linux-amd64/README.md Mar 21 12:36:46.588751 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 21 12:36:47.581476 systemd-networkd[1424]: eth0: Gained IPv6LL Mar 21 12:36:47.584687 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Mar 21 12:36:47.586572 systemd[1]: Reached target network-online.target - Network is Online. Mar 21 12:36:47.589209 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Mar 21 12:36:47.591732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:36:47.602893 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Mar 21 12:36:47.620062 systemd[1]: coreos-metadata.service: Deactivated successfully. Mar 21 12:36:47.620367 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Mar 21 12:36:47.621994 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Mar 21 12:36:47.625391 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Mar 21 12:36:48.287167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 21 12:36:48.288911 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 21 12:36:48.290263 systemd[1]: Startup finished in 737ms (kernel) + 5.124s (initrd) + 4.504s (userspace) = 10.367s. Mar 21 12:36:48.292958 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 21 12:36:48.693478 kubelet[1614]: E0321 12:36:48.693285 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 21 12:36:48.697480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 21 12:36:48.697697 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 21 12:36:48.698084 systemd[1]: kubelet.service: Consumed 946ms CPU time, 252.6M memory peak. Mar 21 12:36:52.185475 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 21 12:36:52.186878 systemd[1]: Started sshd@0-10.0.0.113:22-10.0.0.1:58938.service - OpenSSH per-connection server daemon (10.0.0.1:58938). Mar 21 12:36:52.257845 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 58938 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:52.263020 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:52.277921 systemd-logind[1503]: New session 1 of user core. Mar 21 12:36:52.279378 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Mar 21 12:36:52.280850 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 21 12:36:52.311142 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 21 12:36:52.313939 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 21 12:36:52.330669 (systemd)[1631]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 21 12:36:52.333065 systemd-logind[1503]: New session c1 of user core. Mar 21 12:36:52.476674 systemd[1631]: Queued start job for default target default.target. Mar 21 12:36:52.487551 systemd[1631]: Created slice app.slice - User Application Slice. Mar 21 12:36:52.487577 systemd[1631]: Reached target paths.target - Paths. Mar 21 12:36:52.487616 systemd[1631]: Reached target timers.target - Timers. Mar 21 12:36:52.489084 systemd[1631]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 21 12:36:52.500336 systemd[1631]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 21 12:36:52.500464 systemd[1631]: Reached target sockets.target - Sockets. Mar 21 12:36:52.500508 systemd[1631]: Reached target basic.target - Basic System. Mar 21 12:36:52.500561 systemd[1631]: Reached target default.target - Main User Target. Mar 21 12:36:52.500599 systemd[1631]: Startup finished in 160ms. Mar 21 12:36:52.500918 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 21 12:36:52.502499 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 21 12:36:52.563592 systemd[1]: Started sshd@1-10.0.0.113:22-10.0.0.1:58942.service - OpenSSH per-connection server daemon (10.0.0.1:58942). 
Mar 21 12:36:52.615497 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 58942 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:52.617384 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:52.621745 systemd-logind[1503]: New session 2 of user core. Mar 21 12:36:52.641351 systemd[1]: Started session-2.scope - Session 2 of User core. Mar 21 12:36:52.695422 sshd[1644]: Connection closed by 10.0.0.1 port 58942 Mar 21 12:36:52.695723 sshd-session[1642]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:52.706958 systemd[1]: sshd@1-10.0.0.113:22-10.0.0.1:58942.service: Deactivated successfully. Mar 21 12:36:52.708645 systemd[1]: session-2.scope: Deactivated successfully. Mar 21 12:36:52.710336 systemd-logind[1503]: Session 2 logged out. Waiting for processes to exit. Mar 21 12:36:52.711574 systemd[1]: Started sshd@2-10.0.0.113:22-10.0.0.1:58956.service - OpenSSH per-connection server daemon (10.0.0.1:58956). Mar 21 12:36:52.712308 systemd-logind[1503]: Removed session 2. Mar 21 12:36:52.762412 sshd[1649]: Accepted publickey for core from 10.0.0.1 port 58956 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:52.763775 sshd-session[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:52.767822 systemd-logind[1503]: New session 3 of user core. Mar 21 12:36:52.783359 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 21 12:36:52.833504 sshd[1652]: Connection closed by 10.0.0.1 port 58956 Mar 21 12:36:52.833806 sshd-session[1649]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:52.844974 systemd[1]: sshd@2-10.0.0.113:22-10.0.0.1:58956.service: Deactivated successfully. Mar 21 12:36:52.846779 systemd[1]: session-3.scope: Deactivated successfully. Mar 21 12:36:52.848534 systemd-logind[1503]: Session 3 logged out. Waiting for processes to exit. 
Mar 21 12:36:52.849741 systemd[1]: Started sshd@3-10.0.0.113:22-10.0.0.1:58972.service - OpenSSH per-connection server daemon (10.0.0.1:58972). Mar 21 12:36:52.850498 systemd-logind[1503]: Removed session 3. Mar 21 12:36:52.902348 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 58972 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:52.903827 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:52.908124 systemd-logind[1503]: New session 4 of user core. Mar 21 12:36:52.917345 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 21 12:36:52.971204 sshd[1660]: Connection closed by 10.0.0.1 port 58972 Mar 21 12:36:52.971566 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:52.987069 systemd[1]: sshd@3-10.0.0.113:22-10.0.0.1:58972.service: Deactivated successfully. Mar 21 12:36:52.988965 systemd[1]: session-4.scope: Deactivated successfully. Mar 21 12:36:52.990589 systemd-logind[1503]: Session 4 logged out. Waiting for processes to exit. Mar 21 12:36:52.991809 systemd[1]: Started sshd@4-10.0.0.113:22-10.0.0.1:58984.service - OpenSSH per-connection server daemon (10.0.0.1:58984). Mar 21 12:36:52.992470 systemd-logind[1503]: Removed session 4. Mar 21 12:36:53.039308 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 58984 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:53.040659 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:53.044592 systemd-logind[1503]: New session 5 of user core. Mar 21 12:36:53.054351 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 21 12:36:53.113489 sudo[1669]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 21 12:36:53.113816 sudo[1669]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:36:53.132282 sudo[1669]: pam_unix(sudo:session): session closed for user root Mar 21 12:36:53.133707 sshd[1668]: Connection closed by 10.0.0.1 port 58984 Mar 21 12:36:53.134095 sshd-session[1665]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:53.153179 systemd[1]: sshd@4-10.0.0.113:22-10.0.0.1:58984.service: Deactivated successfully. Mar 21 12:36:53.154818 systemd[1]: session-5.scope: Deactivated successfully. Mar 21 12:36:53.156569 systemd-logind[1503]: Session 5 logged out. Waiting for processes to exit. Mar 21 12:36:53.157853 systemd[1]: Started sshd@5-10.0.0.113:22-10.0.0.1:58990.service - OpenSSH per-connection server daemon (10.0.0.1:58990). Mar 21 12:36:53.158595 systemd-logind[1503]: Removed session 5. Mar 21 12:36:53.212672 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 58990 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:53.213967 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:53.218215 systemd-logind[1503]: New session 6 of user core. Mar 21 12:36:53.228369 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 21 12:36:53.281220 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 21 12:36:53.281571 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:36:53.285312 sudo[1679]: pam_unix(sudo:session): session closed for user root Mar 21 12:36:53.291038 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 21 12:36:53.291431 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:36:53.300675 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 21 12:36:53.350460 augenrules[1701]: No rules Mar 21 12:36:53.352130 systemd[1]: audit-rules.service: Deactivated successfully. Mar 21 12:36:53.352414 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 21 12:36:53.353585 sudo[1678]: pam_unix(sudo:session): session closed for user root Mar 21 12:36:53.354916 sshd[1677]: Connection closed by 10.0.0.1 port 58990 Mar 21 12:36:53.355193 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Mar 21 12:36:53.372300 systemd[1]: sshd@5-10.0.0.113:22-10.0.0.1:58990.service: Deactivated successfully. Mar 21 12:36:53.374284 systemd[1]: session-6.scope: Deactivated successfully. Mar 21 12:36:53.375974 systemd-logind[1503]: Session 6 logged out. Waiting for processes to exit. Mar 21 12:36:53.377381 systemd[1]: Started sshd@6-10.0.0.113:22-10.0.0.1:58992.service - OpenSSH per-connection server daemon (10.0.0.1:58992). Mar 21 12:36:53.378189 systemd-logind[1503]: Removed session 6. Mar 21 12:36:53.435066 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 58992 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:36:53.436573 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:36:53.440524 systemd-logind[1503]: New session 7 of user core. 
Mar 21 12:36:53.456368 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 21 12:36:53.508994 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 21 12:36:53.509333 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 21 12:36:53.801882 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 21 12:36:53.815589 (dockerd)[1734]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 21 12:36:54.065922 dockerd[1734]: time="2025-03-21T12:36:54.065596962Z" level=info msg="Starting up" Mar 21 12:36:54.067514 dockerd[1734]: time="2025-03-21T12:36:54.067484986Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 21 12:36:55.113651 dockerd[1734]: time="2025-03-21T12:36:55.113604115Z" level=info msg="Loading containers: start." Mar 21 12:36:55.295258 kernel: Initializing XFRM netlink socket Mar 21 12:36:55.377163 systemd-networkd[1424]: docker0: Link UP Mar 21 12:36:55.442674 dockerd[1734]: time="2025-03-21T12:36:55.442624465Z" level=info msg="Loading containers: done." Mar 21 12:36:55.455885 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2785265573-merged.mount: Deactivated successfully. 
Mar 21 12:36:55.458770 dockerd[1734]: time="2025-03-21T12:36:55.458721825Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 21 12:36:55.458843 dockerd[1734]: time="2025-03-21T12:36:55.458823568Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 21 12:36:55.458977 dockerd[1734]: time="2025-03-21T12:36:55.458951043Z" level=info msg="Daemon has completed initialization" Mar 21 12:36:55.495392 dockerd[1734]: time="2025-03-21T12:36:55.495298089Z" level=info msg="API listen on /run/docker.sock" Mar 21 12:36:55.495513 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 21 12:36:55.997077 containerd[1520]: time="2025-03-21T12:36:55.997032675Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\"" Mar 21 12:36:56.569474 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3761478627.mount: Deactivated successfully. 
Mar 21 12:36:57.474017 containerd[1520]: time="2025-03-21T12:36:57.473959163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:57.474790 containerd[1520]: time="2025-03-21T12:36:57.474703854Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.3: active requests=0, bytes read=28682430" Mar 21 12:36:57.475840 containerd[1520]: time="2025-03-21T12:36:57.475805139Z" level=info msg="ImageCreate event name:\"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:57.478140 containerd[1520]: time="2025-03-21T12:36:57.478102585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:57.479012 containerd[1520]: time="2025-03-21T12:36:57.478980308Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.3\" with image id \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:279e45cf07e4f56925c3c5237179eb63616788426a96e94df5fedf728b18926e\", size \"28679230\" in 1.481907552s" Mar 21 12:36:57.479062 containerd[1520]: time="2025-03-21T12:36:57.479013423Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.3\" returns image reference \"sha256:f8bdc4cfa0651e2d7edb4678d2b90129aef82a19249b37dc8d4705e8bd604295\"" Mar 21 12:36:57.479699 containerd[1520]: time="2025-03-21T12:36:57.479663308Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\"" Mar 21 12:36:58.580434 containerd[1520]: time="2025-03-21T12:36:58.580358276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:58.581181 containerd[1520]: time="2025-03-21T12:36:58.581094772Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.3: active requests=0, bytes read=24779684" Mar 21 12:36:58.582324 containerd[1520]: time="2025-03-21T12:36:58.582291717Z" level=info msg="ImageCreate event name:\"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:58.584701 containerd[1520]: time="2025-03-21T12:36:58.584652971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:36:58.585564 containerd[1520]: time="2025-03-21T12:36:58.585535058Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.3\" with image id \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:54456a96a1bbdc35dcc2e70fcc1355bf655af67694e40b650ac12e83521f6411\", size \"26267292\" in 1.10582896s" Mar 21 12:36:58.585606 containerd[1520]: time="2025-03-21T12:36:58.585563791Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.3\" returns image reference \"sha256:085818208a5213f37ef6d103caaf8e1e243816a614eb5b87a98bfffe79c687b5\"" Mar 21 12:36:58.586136 containerd[1520]: time="2025-03-21T12:36:58.586102993Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\"" Mar 21 12:36:58.948139 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 21 12:36:58.949791 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 21 12:36:59.133341 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 21 12:36:59.151528 (kubelet)[2006]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Mar 21 12:36:59.194509 kubelet[2006]: E0321 12:36:59.194441 2006 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Mar 21 12:36:59.201019 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Mar 21 12:36:59.201268 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Mar 21 12:36:59.201656 systemd[1]: kubelet.service: Consumed 218ms CPU time, 104.4M memory peak.
Mar 21 12:37:00.271072 containerd[1520]: time="2025-03-21T12:37:00.270993430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:00.271799 containerd[1520]: time="2025-03-21T12:37:00.271745983Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.3: active requests=0, bytes read=19171419"
Mar 21 12:37:00.272773 containerd[1520]: time="2025-03-21T12:37:00.272743463Z" level=info msg="ImageCreate event name:\"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:00.275472 containerd[1520]: time="2025-03-21T12:37:00.275442779Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:00.276329 containerd[1520]: time="2025-03-21T12:37:00.276295761Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.3\" with image id \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:aafae2e3a8d65bc6dc3a0c6095c24bc72b1ff608e1417f0f5e860ce4a61c27df\", size \"20659045\" in 1.690164477s"
Mar 21 12:37:00.276329 containerd[1520]: time="2025-03-21T12:37:00.276329150Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.3\" returns image reference \"sha256:b4260bf5078ab1b01dd05fb05015fc436b7100b7b9b5ea738e247a86008b16b8\""
Mar 21 12:37:00.276797 containerd[1520]: time="2025-03-21T12:37:00.276774341Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\""
Mar 21 12:37:01.248378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount485247749.mount: Deactivated successfully.
Mar 21 12:37:01.508623 containerd[1520]: time="2025-03-21T12:37:01.508490899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:01.509616 containerd[1520]: time="2025-03-21T12:37:01.509574369Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.3: active requests=0, bytes read=30918185"
Mar 21 12:37:01.510731 containerd[1520]: time="2025-03-21T12:37:01.510704394Z" level=info msg="ImageCreate event name:\"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:01.512613 containerd[1520]: time="2025-03-21T12:37:01.512585445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:01.513131 containerd[1520]: time="2025-03-21T12:37:01.513090177Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.3\" with image id \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:5015269547a0b7dd2c062758e9a64467b58978ff2502cad4c3f5cdf4aa554ad3\", size \"30917204\" in 1.236284098s"
Mar 21 12:37:01.513169 containerd[1520]: time="2025-03-21T12:37:01.513131720Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.3\" returns image reference \"sha256:a1ae78fd2f9d8fc345928378dc947c7f1e95f01c1a552781827071867a95d09c\""
Mar 21 12:37:01.513578 containerd[1520]: time="2025-03-21T12:37:01.513558720Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Mar 21 12:37:02.015283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2505608010.mount: Deactivated successfully.
Mar 21 12:37:03.049413 containerd[1520]: time="2025-03-21T12:37:03.049354975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:03.050134 containerd[1520]: time="2025-03-21T12:37:03.050070983Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Mar 21 12:37:03.053166 containerd[1520]: time="2025-03-21T12:37:03.053123527Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:03.055607 containerd[1520]: time="2025-03-21T12:37:03.055551040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:03.056500 containerd[1520]: time="2025-03-21T12:37:03.056477410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.542875966s"
Mar 21 12:37:03.056552 containerd[1520]: time="2025-03-21T12:37:03.056506300Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Mar 21 12:37:03.057112 containerd[1520]: time="2025-03-21T12:37:03.057086692Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Mar 21 12:37:03.511115 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706522572.mount: Deactivated successfully.
Mar 21 12:37:03.516169 containerd[1520]: time="2025-03-21T12:37:03.516127655Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:37:03.516884 containerd[1520]: time="2025-03-21T12:37:03.516840864Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Mar 21 12:37:03.517961 containerd[1520]: time="2025-03-21T12:37:03.517916714Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:37:03.519851 containerd[1520]: time="2025-03-21T12:37:03.519819888Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 21 12:37:03.520389 containerd[1520]: time="2025-03-21T12:37:03.520357055Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 463.242396ms"
Mar 21 12:37:03.520389 containerd[1520]: time="2025-03-21T12:37:03.520383726Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Mar 21 12:37:03.520834 containerd[1520]: time="2025-03-21T12:37:03.520807551Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\""
Mar 21 12:37:04.059955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount105019769.mount: Deactivated successfully.
Mar 21 12:37:05.600284 containerd[1520]: time="2025-03-21T12:37:05.600196758Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:05.601170 containerd[1520]: time="2025-03-21T12:37:05.601096449Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320"
Mar 21 12:37:05.602525 containerd[1520]: time="2025-03-21T12:37:05.602475541Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:05.604945 containerd[1520]: time="2025-03-21T12:37:05.604915475Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:05.605846 containerd[1520]: time="2025-03-21T12:37:05.605792480Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.084954876s"
Mar 21 12:37:05.605846 containerd[1520]: time="2025-03-21T12:37:05.605823614Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\""
Mar 21 12:37:08.090982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:08.091147 systemd[1]: kubelet.service: Consumed 218ms CPU time, 104.4M memory peak.
Mar 21 12:37:08.093384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:37:08.116033 systemd[1]: Reload requested from client PID 2165 ('systemctl') (unit session-7.scope)...
Mar 21 12:37:08.116049 systemd[1]: Reloading...
Mar 21 12:37:08.202259 zram_generator::config[2210]: No configuration found.
Mar 21 12:37:08.381811 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:37:08.482022 systemd[1]: Reloading finished in 365 ms.
Mar 21 12:37:08.537169 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:08.539186 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:37:08.542746 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 12:37:08.543004 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:08.543032 systemd[1]: kubelet.service: Consumed 151ms CPU time, 91.9M memory peak.
Mar 21 12:37:08.544566 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:37:08.724837 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:08.735604 (kubelet)[2259]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 12:37:08.774098 kubelet[2259]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:37:08.774098 kubelet[2259]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 21 12:37:08.774098 kubelet[2259]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:37:08.774535 kubelet[2259]: I0321 12:37:08.774160 2259 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 12:37:08.940292 kubelet[2259]: I0321 12:37:08.940242 2259 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 21 12:37:08.940292 kubelet[2259]: I0321 12:37:08.940279 2259 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 12:37:08.940664 kubelet[2259]: I0321 12:37:08.940641 2259 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 21 12:37:08.961568 kubelet[2259]: E0321 12:37:08.961528 2259 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.113:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:08.962755 kubelet[2259]: I0321 12:37:08.962736 2259 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:37:08.970443 kubelet[2259]: I0321 12:37:08.970427 2259 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 21 12:37:08.975832 kubelet[2259]: I0321 12:37:08.975755 2259 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:37:08.977708 kubelet[2259]: I0321 12:37:08.977065 2259 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:37:08.977708 kubelet[2259]: I0321 12:37:08.977116 2259 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 21 12:37:08.977708 kubelet[2259]: I0321 12:37:08.977385 2259 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:37:08.977708 kubelet[2259]: I0321 12:37:08.977395 2259 container_manager_linux.go:304] "Creating device plugin manager"
Mar 21 12:37:08.977899 kubelet[2259]: I0321 12:37:08.977536 2259 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:37:08.980941 kubelet[2259]: I0321 12:37:08.980888 2259 kubelet.go:446] "Attempting to sync node with API server"
Mar 21 12:37:08.980941 kubelet[2259]: I0321 12:37:08.980932 2259 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:37:08.981011 kubelet[2259]: I0321 12:37:08.980965 2259 kubelet.go:352] "Adding apiserver pod source"
Mar 21 12:37:08.981011 kubelet[2259]: I0321 12:37:08.980979 2259 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:37:08.983587 kubelet[2259]: W0321 12:37:08.983411 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:08.983587 kubelet[2259]: E0321 12:37:08.983479 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:08.984341 kubelet[2259]: I0321 12:37:08.984316 2259 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:37:08.984550 kubelet[2259]: W0321 12:37:08.984516 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:08.984612 kubelet[2259]: E0321 12:37:08.984552 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:08.984764 kubelet[2259]: I0321 12:37:08.984747 2259 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:37:08.985726 kubelet[2259]: W0321 12:37:08.985698 2259 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Mar 21 12:37:08.987476 kubelet[2259]: I0321 12:37:08.987451 2259 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 21 12:37:08.987525 kubelet[2259]: I0321 12:37:08.987494 2259 server.go:1287] "Started kubelet"
Mar 21 12:37:08.988174 kubelet[2259]: I0321 12:37:08.988128 2259 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:37:08.989248 kubelet[2259]: I0321 12:37:08.988698 2259 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:37:08.989248 kubelet[2259]: I0321 12:37:08.989032 2259 server.go:490] "Adding debug handlers to kubelet server"
Mar 21 12:37:08.989248 kubelet[2259]: I0321 12:37:08.989051 2259 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:37:08.990120 kubelet[2259]: I0321 12:37:08.990075 2259 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:37:08.991042 kubelet[2259]: I0321 12:37:08.990300 2259 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 21 12:37:08.991042 kubelet[2259]: I0321 12:37:08.990386 2259 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:37:08.991042 kubelet[2259]: I0321 12:37:08.990306 2259 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 21 12:37:08.991042 kubelet[2259]: I0321 12:37:08.990483 2259 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:37:08.991042 kubelet[2259]: W0321 12:37:08.990879 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:08.991042 kubelet[2259]: E0321 12:37:08.990915 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:08.991042 kubelet[2259]: E0321 12:37:08.990952 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:08.991272 kubelet[2259]: E0321 12:37:08.991224 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="200ms"
Mar 21 12:37:08.991870 kubelet[2259]: I0321 12:37:08.991844 2259 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:37:08.991999 kubelet[2259]: I0321 12:37:08.991918 2259 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:37:08.992270 kubelet[2259]: E0321 12:37:08.992254 2259 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:37:08.992572 kubelet[2259]: E0321 12:37:08.990905 2259 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.113:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.113:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.182ed1a8da61152f default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-03-21 12:37:08.987467055 +0000 UTC m=+0.247891692,LastTimestamp:2025-03-21 12:37:08.987467055 +0000 UTC m=+0.247891692,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
Mar 21 12:37:08.993767 kubelet[2259]: I0321 12:37:08.993680 2259 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:37:09.007776 kubelet[2259]: I0321 12:37:09.007671 2259 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 21 12:37:09.007776 kubelet[2259]: I0321 12:37:09.007722 2259 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 21 12:37:09.007776 kubelet[2259]: I0321 12:37:09.007742 2259 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:37:09.009077 kubelet[2259]: I0321 12:37:09.009009 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:37:09.011010 kubelet[2259]: I0321 12:37:09.010970 2259 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:37:09.011010 kubelet[2259]: I0321 12:37:09.011002 2259 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 21 12:37:09.011114 kubelet[2259]: I0321 12:37:09.011028 2259 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 21 12:37:09.011114 kubelet[2259]: I0321 12:37:09.011039 2259 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 21 12:37:09.011172 kubelet[2259]: E0321 12:37:09.011100 2259 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:37:09.011809 kubelet[2259]: W0321 12:37:09.011745 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:09.011927 kubelet[2259]: E0321 12:37:09.011809 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:09.091693 kubelet[2259]: E0321 12:37:09.091641 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:09.111853 kubelet[2259]: E0321 12:37:09.111800 2259 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 21 12:37:09.192218 kubelet[2259]: E0321 12:37:09.192160 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:09.192665 kubelet[2259]: E0321 12:37:09.192622 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="400ms"
Mar 21 12:37:09.292662 kubelet[2259]: E0321 12:37:09.292499 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:09.302583 kubelet[2259]: I0321 12:37:09.302509 2259 policy_none.go:49] "None policy: Start"
Mar 21 12:37:09.302583 kubelet[2259]: I0321 12:37:09.302552 2259 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 21 12:37:09.302583 kubelet[2259]: I0321 12:37:09.302592 2259 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:37:09.310073 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Mar 21 12:37:09.311923 kubelet[2259]: E0321 12:37:09.311885 2259 kubelet.go:2412] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 21 12:37:09.322429 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 21 12:37:09.325279 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 21 12:37:09.336415 kubelet[2259]: I0321 12:37:09.336377 2259 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:37:09.336636 kubelet[2259]: I0321 12:37:09.336610 2259 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 21 12:37:09.336670 kubelet[2259]: I0321 12:37:09.336628 2259 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:37:09.336890 kubelet[2259]: I0321 12:37:09.336866 2259 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:37:09.337904 kubelet[2259]: E0321 12:37:09.337874 2259 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 21 12:37:09.337987 kubelet[2259]: E0321 12:37:09.337916 2259 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found"
Mar 21 12:37:09.438618 kubelet[2259]: I0321 12:37:09.438588 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 21 12:37:09.439007 kubelet[2259]: E0321 12:37:09.438972 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Mar 21 12:37:09.593850 kubelet[2259]: E0321 12:37:09.593697 2259 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.113:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.113:6443: connect: connection refused" interval="800ms"
Mar 21 12:37:09.640900 kubelet[2259]: I0321 12:37:09.640853 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 21 12:37:09.641269 kubelet[2259]: E0321 12:37:09.641220 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Mar 21 12:37:09.720558 systemd[1]: Created slice kubepods-burstable-pod000022ced71603ceb3f76b8c44fa5a36.slice - libcontainer container kubepods-burstable-pod000022ced71603ceb3f76b8c44fa5a36.slice.
Mar 21 12:37:09.732066 kubelet[2259]: E0321 12:37:09.732026 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:09.734721 systemd[1]: Created slice kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice - libcontainer container kubepods-burstable-podcbbb394ff48414687df77e1bc213eeb5.slice.
Mar 21 12:37:09.745504 kubelet[2259]: E0321 12:37:09.745465 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:09.748137 systemd[1]: Created slice kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice - libcontainer container kubepods-burstable-pod3700e556aa2777679a324159272023f1.slice.
Mar 21 12:37:09.749678 kubelet[2259]: E0321 12:37:09.749656 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:09.794097 kubelet[2259]: I0321 12:37:09.794054 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:09.794097 kubelet[2259]: I0321 12:37:09.794088 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:09.794097 kubelet[2259]: I0321 12:37:09.794107 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:09.794612 kubelet[2259]: I0321 12:37:09.794121 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:09.794612 kubelet[2259]: I0321 12:37:09.794137 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:09.794612 kubelet[2259]: I0321 12:37:09.794153 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:09.794612 kubelet[2259]: I0321 12:37:09.794170 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:09.794612 kubelet[2259]: I0321 12:37:09.794189 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:09.794761 kubelet[2259]: I0321 12:37:09.794209 2259 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:09.817762 kubelet[2259]: W0321 12:37:09.817722 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:09.817762 kubelet[2259]: E0321 12:37:09.817764 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.113:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:09.820417 kubelet[2259]: W0321 12:37:09.820374 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:09.820450 kubelet[2259]: E0321 12:37:09.820423 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.113:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:09.907310 kubelet[2259]: W0321 12:37:09.906950 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:09.907310 kubelet[2259]: E0321 12:37:09.907063 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.113:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:10.033919 containerd[1520]: time="2025-03-21T12:37:10.033857997Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:000022ced71603ceb3f76b8c44fa5a36,Namespace:kube-system,Attempt:0,}"
Mar 21 12:37:10.042868 kubelet[2259]: I0321 12:37:10.042837 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 21 12:37:10.043177 kubelet[2259]: E0321 12:37:10.043152 2259 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://10.0.0.113:6443/api/v1/nodes\": dial tcp 10.0.0.113:6443: connect: connection refused" node="localhost"
Mar 21 12:37:10.046665 containerd[1520]: time="2025-03-21T12:37:10.046605676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,}"
Mar 21 12:37:10.051330 containerd[1520]: time="2025-03-21T12:37:10.051283499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,}"
Mar 21 12:37:10.062137 containerd[1520]: time="2025-03-21T12:37:10.062093811Z" level=info msg="connecting to shim 2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780" address="unix:///run/containerd/s/0b068af3cf7f722f3f073a87de4e83e594a2aa392ab99b39dd67f7cd59be2220" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:10.078656 containerd[1520]: time="2025-03-21T12:37:10.078604201Z" level=info msg="connecting to shim 1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b" address="unix:///run/containerd/s/8a7eba191d9120c3c31d2d8484952a4f106644be3647e06fb673bf394cc24b3d" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:10.089406 systemd[1]: Started cri-containerd-2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780.scope - libcontainer container 2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780.
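[Annotation] Every reflector and node-registration failure above fails the same way: `dial tcp 10.0.0.113:6443: connect: connection refused`, i.e. the kubelet came up before the kube-apiserver static pod was listening. When triaging a log like this, a quick way to confirm that all the refusals target one endpoint (rather than indicating broader network trouble) is to tally them. A minimal sketch; `kubelet.log` is a placeholder file name, not taken from this log — in practice you would pipe in `journalctl -u kubelet`:

```shell
# Tally "connection refused" dial errors per endpoint.
# kubelet.log is a placeholder path for the captured log text.
grep -o 'dial tcp [0-9.]*:[0-9]*: connect: connection refused' kubelet.log \
  | awk '{sub(/:$/, "", $3); print $3}' \
  | sort | uniq -c | sort -rn
```

If every refusal tallies under a single `host:port` (here, 10.0.0.113:6443), the fault is that one endpoint coming up late, which matches the recovery visible further down in the log.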
Mar 21 12:37:10.091996 containerd[1520]: time="2025-03-21T12:37:10.091865393Z" level=info msg="connecting to shim 8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9" address="unix:///run/containerd/s/51d195e228d7265da9f9a45c07f4341be56cb08dc073d7a8f28f205bb7fcde13" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:10.113434 systemd[1]: Started cri-containerd-1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b.scope - libcontainer container 1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b.
Mar 21 12:37:10.117216 systemd[1]: Started cri-containerd-8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9.scope - libcontainer container 8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9.
Mar 21 12:37:10.136633 containerd[1520]: time="2025-03-21T12:37:10.136570870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:000022ced71603ceb3f76b8c44fa5a36,Namespace:kube-system,Attempt:0,} returns sandbox id \"2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780\""
Mar 21 12:37:10.139860 containerd[1520]: time="2025-03-21T12:37:10.139820137Z" level=info msg="CreateContainer within sandbox \"2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 21 12:37:10.149662 containerd[1520]: time="2025-03-21T12:37:10.149620145Z" level=info msg="Container f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:10.157597 containerd[1520]: time="2025-03-21T12:37:10.157495097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:cbbb394ff48414687df77e1bc213eeb5,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b\""
Mar 21 12:37:10.159904 containerd[1520]: time="2025-03-21T12:37:10.159777545Z" level=info msg="CreateContainer within sandbox \"2017a0d74eb1f5575538dd3f17743feec38ad59e070c495857a830be49631780\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e\""
Mar 21 12:37:10.159904 containerd[1520]: time="2025-03-21T12:37:10.159895326Z" level=info msg="CreateContainer within sandbox \"1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 21 12:37:10.160951 containerd[1520]: time="2025-03-21T12:37:10.160919255Z" level=info msg="StartContainer for \"f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e\""
Mar 21 12:37:10.162041 containerd[1520]: time="2025-03-21T12:37:10.162010161Z" level=info msg="connecting to shim f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e" address="unix:///run/containerd/s/0b068af3cf7f722f3f073a87de4e83e594a2aa392ab99b39dd67f7cd59be2220" protocol=ttrpc version=3
Mar 21 12:37:10.168691 containerd[1520]: time="2025-03-21T12:37:10.168647490Z" level=info msg="Container a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:10.170118 containerd[1520]: time="2025-03-21T12:37:10.170083206Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:3700e556aa2777679a324159272023f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9\""
Mar 21 12:37:10.172175 containerd[1520]: time="2025-03-21T12:37:10.172147779Z" level=info msg="CreateContainer within sandbox \"8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 21 12:37:10.178686 containerd[1520]: time="2025-03-21T12:37:10.178646543Z" level=info msg="CreateContainer within sandbox \"1d17ef398daa8923e26ecba44e41d3edc083d5aaf8f178543cab18cfc502440b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890\""
Mar 21 12:37:10.179001 containerd[1520]: time="2025-03-21T12:37:10.178976984Z" level=info msg="StartContainer for \"a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890\""
Mar 21 12:37:10.179902 containerd[1520]: time="2025-03-21T12:37:10.179878000Z" level=info msg="connecting to shim a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890" address="unix:///run/containerd/s/8a7eba191d9120c3c31d2d8484952a4f106644be3647e06fb673bf394cc24b3d" protocol=ttrpc version=3
Mar 21 12:37:10.181408 systemd[1]: Started cri-containerd-f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e.scope - libcontainer container f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e.
Mar 21 12:37:10.184645 containerd[1520]: time="2025-03-21T12:37:10.184597853Z" level=info msg="Container 110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:10.194328 containerd[1520]: time="2025-03-21T12:37:10.194255706Z" level=info msg="CreateContainer within sandbox \"8d9d09f5b707ac5ddce463f57c3a23c25d474eec4aedbbd975cf1a5a91c732c9\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e\""
Mar 21 12:37:10.194895 containerd[1520]: time="2025-03-21T12:37:10.194846084Z" level=info msg="StartContainer for \"110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e\""
Mar 21 12:37:10.196031 containerd[1520]: time="2025-03-21T12:37:10.195995704Z" level=info msg="connecting to shim 110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e" address="unix:///run/containerd/s/51d195e228d7265da9f9a45c07f4341be56cb08dc073d7a8f28f205bb7fcde13" protocol=ttrpc version=3
Mar 21 12:37:10.199062 kubelet[2259]: W0321 12:37:10.199014 2259 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.113:6443: connect: connection refused
Mar 21 12:37:10.199121 kubelet[2259]: E0321 12:37:10.199071 2259 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.113:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.113:6443: connect: connection refused" logger="UnhandledError"
Mar 21 12:37:10.199443 systemd[1]: Started cri-containerd-a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890.scope - libcontainer container a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890.
Mar 21 12:37:10.223369 systemd[1]: Started cri-containerd-110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e.scope - libcontainer container 110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e.
Mar 21 12:37:10.246072 containerd[1520]: time="2025-03-21T12:37:10.246026331Z" level=info msg="StartContainer for \"f69ce3e6db0aa4ca25c7160dacece1438ce582081426ab94ae31baee5509137e\" returns successfully"
Mar 21 12:37:10.268048 containerd[1520]: time="2025-03-21T12:37:10.268001057Z" level=info msg="StartContainer for \"a992fa3b7c96dde2edea9c48f13fabb6bc420247fcc09eb79a563e14b8d15890\" returns successfully"
Mar 21 12:37:10.283081 containerd[1520]: time="2025-03-21T12:37:10.283009386Z" level=info msg="StartContainer for \"110314f7348b81dfb9076fd178b50a9c64bc624b749580f5757702722e877a9e\" returns successfully"
Mar 21 12:37:10.851582 kubelet[2259]: I0321 12:37:10.851555 2259 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 21 12:37:11.019062 kubelet[2259]: E0321 12:37:11.018920 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:11.025263 kubelet[2259]: E0321 12:37:11.023545 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:11.026352 kubelet[2259]: E0321 12:37:11.026332 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:11.264354 kubelet[2259]: E0321 12:37:11.264319 2259 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost"
Mar 21 12:37:11.412728 kubelet[2259]: I0321 12:37:11.412681 2259 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 21 12:37:11.412728 kubelet[2259]: E0321 12:37:11.412717 2259 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found"
Mar 21 12:37:11.585963 kubelet[2259]: E0321 12:37:11.585854 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:11.686918 kubelet[2259]: E0321 12:37:11.686880 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:11.787513 kubelet[2259]: E0321 12:37:11.787477 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:11.888283 kubelet[2259]: E0321 12:37:11.888128 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:11.988328 kubelet[2259]: E0321 12:37:11.988267 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.028125 kubelet[2259]: E0321 12:37:12.028103 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:12.028401 kubelet[2259]: E0321 12:37:12.028364 2259 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost"
Mar 21 12:37:12.089003 kubelet[2259]: E0321 12:37:12.088947 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.189754 kubelet[2259]: E0321 12:37:12.189699 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.290359 kubelet[2259]: E0321 12:37:12.290294 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.391246 kubelet[2259]: E0321 12:37:12.391179 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.492315 kubelet[2259]: E0321 12:37:12.492162 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.592621 kubelet[2259]: E0321 12:37:12.592572 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.692885 kubelet[2259]: E0321 12:37:12.692828 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.793526 kubelet[2259]: E0321 12:37:12.793402 2259 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:12.892064 kubelet[2259]: I0321 12:37:12.891981 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:12.898297 kubelet[2259]: I0321 12:37:12.898267 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:12.901822 kubelet[2259]: I0321 12:37:12.901791 2259 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:12.983486 kubelet[2259]: I0321 12:37:12.983437 2259 apiserver.go:52] "Watching apiserver"
Mar 21 12:37:12.991047 kubelet[2259]: I0321 12:37:12.990996 2259 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:37:13.664299 systemd[1]: Reload requested from client PID 2536 ('systemctl') (unit session-7.scope)...
Mar 21 12:37:13.664315 systemd[1]: Reloading...
Mar 21 12:37:13.736256 zram_generator::config[2583]: No configuration found.
Mar 21 12:37:14.249876 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 21 12:37:14.365366 systemd[1]: Reloading finished in 700 ms.
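[Annotation] During the daemon reload above, systemd warns that line 6 of docker.socket points below the legacy /var/run/ directory and rewrites the path at runtime. The permanent fix it asks for is a one-line change in the unit file; a sketch under the assumption that the rest of the unit stays as shipped:

```ini
; /usr/lib/systemd/system/docker.socket -- only the ListenStream= line changes,
; per the "please update the unit file accordingly" warning in the log
[Socket]
ListenStream=/run/docker.sock
```

On Flatcar, /usr is mounted read-only (see the `mount.usrflags=ro` kernel command line in the boot header), so in practice the override would go in a drop-in under /etc/systemd/system/ rather than editing the shipped unit.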
Mar 21 12:37:14.393051 kubelet[2259]: I0321 12:37:14.392948 2259 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:37:14.393036 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:37:14.420829 systemd[1]: kubelet.service: Deactivated successfully.
Mar 21 12:37:14.421179 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:14.421265 systemd[1]: kubelet.service: Consumed 722ms CPU time, 127.6M memory peak.
Mar 21 12:37:14.424866 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 21 12:37:14.618086 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 21 12:37:14.629585 (kubelet)[2625]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 21 12:37:14.670176 kubelet[2625]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 21 12:37:14.670176 kubelet[2625]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Mar 21 12:37:14.670176 kubelet[2625]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
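[Annotation] The restarted kubelet (PID 2625) logs three deprecated-flag warnings, each pointing at the config file mechanism from the linked kubelet-config-file task page. A hedged sketch of the direction those warnings suggest, moving the runtime endpoint into KubeletConfiguration; the containerd socket path shown is an assumption, not taken from this log, and --pod-infra-container-image has no config-file equivalent since the CRI now reports the sandbox image:

```yaml
# KubeletConfiguration fragment (location depends on how kubelet is launched;
# typically the file passed via --config). Sketch only.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag;
# the socket path below is an assumed containerd default
containerRuntimeEndpoint: "unix:///run/containerd/containerd.sock"
```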
Mar 21 12:37:14.670757 kubelet[2625]: I0321 12:37:14.670300 2625 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 21 12:37:14.677115 kubelet[2625]: I0321 12:37:14.677072 2625 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Mar 21 12:37:14.677115 kubelet[2625]: I0321 12:37:14.677102 2625 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 21 12:37:14.677483 kubelet[2625]: I0321 12:37:14.677458 2625 server.go:954] "Client rotation is on, will bootstrap in background"
Mar 21 12:37:14.678983 kubelet[2625]: I0321 12:37:14.678961 2625 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 21 12:37:14.684121 kubelet[2625]: I0321 12:37:14.683718 2625 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 21 12:37:14.687719 kubelet[2625]: I0321 12:37:14.687637 2625 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 21 12:37:14.694251 kubelet[2625]: I0321 12:37:14.693537 2625 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 21 12:37:14.694251 kubelet[2625]: I0321 12:37:14.693756 2625 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 21 12:37:14.694251 kubelet[2625]: I0321 12:37:14.693785 2625 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 21 12:37:14.694251 kubelet[2625]: I0321 12:37:14.693944 2625 topology_manager.go:138] "Creating topology manager with none policy"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.693953 2625 container_manager_linux.go:304] "Creating device plugin manager"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.693991 2625 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.694138 2625 kubelet.go:446] "Attempting to sync node with API server"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.694153 2625 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.694176 2625 kubelet.go:352] "Adding apiserver pod source"
Mar 21 12:37:14.694499 kubelet[2625]: I0321 12:37:14.694192 2625 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 21 12:37:14.695358 kubelet[2625]: I0321 12:37:14.695339 2625 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 21 12:37:14.695798 kubelet[2625]: I0321 12:37:14.695786 2625 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 21 12:37:14.696265 kubelet[2625]: I0321 12:37:14.696253 2625 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Mar 21 12:37:14.696347 kubelet[2625]: I0321 12:37:14.696338 2625 server.go:1287] "Started kubelet"
Mar 21 12:37:14.696524 kubelet[2625]: I0321 12:37:14.696488 2625 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Mar 21 12:37:14.696798 kubelet[2625]: I0321 12:37:14.696757 2625 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 21 12:37:14.697072 kubelet[2625]: I0321 12:37:14.697060 2625 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 21 12:37:14.699185 kubelet[2625]: I0321 12:37:14.699171 2625 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 21 12:37:14.699820 kubelet[2625]: I0321 12:37:14.699798 2625 server.go:490] "Adding debug handlers to kubelet server"
Mar 21 12:37:14.705074 kubelet[2625]: I0321 12:37:14.704203 2625 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Mar 21 12:37:14.707616 kubelet[2625]: I0321 12:37:14.707585 2625 volume_manager.go:297] "Starting Kubelet Volume Manager"
Mar 21 12:37:14.707809 kubelet[2625]: E0321 12:37:14.707786 2625 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found"
Mar 21 12:37:14.708445 kubelet[2625]: I0321 12:37:14.708428 2625 factory.go:221] Registration of the systemd container factory successfully
Mar 21 12:37:14.708721 kubelet[2625]: I0321 12:37:14.708582 2625 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 21 12:37:14.709567 kubelet[2625]: I0321 12:37:14.709553 2625 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 21 12:37:14.712577 kubelet[2625]: I0321 12:37:14.712521 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 21 12:37:14.713794 kubelet[2625]: I0321 12:37:14.713764 2625 reconciler.go:26] "Reconciler: start to sync state"
Mar 21 12:37:14.715948 kubelet[2625]: I0321 12:37:14.715913 2625 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 21 12:37:14.715989 kubelet[2625]: I0321 12:37:14.715951 2625 status_manager.go:227] "Starting to sync pod status with apiserver"
Mar 21 12:37:14.715989 kubelet[2625]: I0321 12:37:14.715972 2625 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Mar 21 12:37:14.715989 kubelet[2625]: I0321 12:37:14.715980 2625 kubelet.go:2388] "Starting kubelet main sync loop"
Mar 21 12:37:14.716083 kubelet[2625]: E0321 12:37:14.715987 2625 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 21 12:37:14.716083 kubelet[2625]: E0321 12:37:14.716053 2625 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 21 12:37:14.717588 kubelet[2625]: I0321 12:37:14.716934 2625 factory.go:221] Registration of the containerd container factory successfully
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747073 2625 cpu_manager.go:221] "Starting CPU manager" policy="none"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747095 2625 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747116 2625 state_mem.go:36] "Initialized new in-memory state store"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747294 2625 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747304 2625 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747330 2625 policy_none.go:49] "None policy: Start"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747339 2625 memory_manager.go:186] "Starting memorymanager" policy="None"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747349 2625 state_mem.go:35] "Initializing new in-memory state store"
Mar 21 12:37:14.747678 kubelet[2625]: I0321 12:37:14.747441 2625 state_mem.go:75] "Updated machine memory state"
Mar 21 12:37:14.755343 kubelet[2625]: I0321 12:37:14.754811 2625 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 21 12:37:14.755343 kubelet[2625]: I0321 12:37:14.754997 2625 eviction_manager.go:189] "Eviction manager: starting control loop"
Mar 21 12:37:14.755343 kubelet[2625]: I0321 12:37:14.755007 2625 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 21 12:37:14.755507 kubelet[2625]: I0321 12:37:14.755481 2625 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 21 12:37:14.756553 kubelet[2625]: E0321 12:37:14.756359 2625 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Mar 21 12:37:14.817658 kubelet[2625]: I0321 12:37:14.817617 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:14.817803 kubelet[2625]: I0321 12:37:14.817716 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:14.817803 kubelet[2625]: I0321 12:37:14.817737 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.824289 kubelet[2625]: E0321 12:37:14.824267 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.824545 kubelet[2625]: E0321 12:37:14.824507 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:14.824915 kubelet[2625]: E0321 12:37:14.824902 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:14.857601 kubelet[2625]: I0321 12:37:14.857568 2625 kubelet_node_status.go:76] "Attempting to register node" node="localhost"
Mar 21 12:37:14.862262 kubelet[2625]: I0321 12:37:14.862220 2625 kubelet_node_status.go:125] "Node was previously registered" node="localhost"
Mar 21 12:37:14.862332 kubelet[2625]: I0321 12:37:14.862293 2625 kubelet_node_status.go:79] "Successfully registered node" node="localhost"
Mar 21 12:37:14.914819 kubelet[2625]: I0321 12:37:14.914770 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.914819 kubelet[2625]: I0321 12:37:14.914821 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.914995 kubelet[2625]: I0321 12:37:14.914842 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:14.914995 kubelet[2625]: I0321 12:37:14.914860 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3700e556aa2777679a324159272023f1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"3700e556aa2777679a324159272023f1\") " pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:14.914995 kubelet[2625]: I0321 12:37:14.914877 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:14.914995 kubelet[2625]: I0321 12:37:14.914894 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/000022ced71603ceb3f76b8c44fa5a36-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"000022ced71603ceb3f76b8c44fa5a36\") " pod="kube-system/kube-apiserver-localhost"
Mar 21 12:37:14.914995 kubelet[2625]: I0321 12:37:14.914909 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.915143 kubelet[2625]: I0321 12:37:14.914926 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:14.915143 kubelet[2625]: I0321 12:37:14.914941 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cbbb394ff48414687df77e1bc213eeb5-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"cbbb394ff48414687df77e1bc213eeb5\") " pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:15.696055 kubelet[2625]: I0321 12:37:15.695470 2625 apiserver.go:52] "Watching apiserver"
Mar 21 12:37:15.711289 kubelet[2625]: I0321 12:37:15.711192 2625 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 21 12:37:15.731824 kubelet[2625]: I0321 12:37:15.731786 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:15.733698 kubelet[2625]: I0321 12:37:15.732167 2625 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:15.742975 kubelet[2625]: E0321 12:37:15.742631 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
Mar 21 12:37:15.742975 kubelet[2625]: E0321 12:37:15.742904 2625 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
Mar 21 12:37:15.751981 kubelet[2625]: I0321 12:37:15.751918 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.751783643 podStartE2EDuration="3.751783643s" podCreationTimestamp="2025-03-21 12:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:15.751307774 +0000 UTC m=+1.117652786" watchObservedRunningTime="2025-03-21 12:37:15.751783643 +0000 UTC m=+1.118128656"
Mar 21 12:37:15.768560 kubelet[2625]: I0321 12:37:15.768495 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=3.76847335 podStartE2EDuration="3.76847335s" podCreationTimestamp="2025-03-21 12:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:15.760301283 +0000 UTC m=+1.126646295" watchObservedRunningTime="2025-03-21 12:37:15.76847335 +0000 UTC m=+1.134818362"
Mar
21 12:37:15.778186 kubelet[2625]: I0321 12:37:15.778131 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.778106758 podStartE2EDuration="3.778106758s" podCreationTimestamp="2025-03-21 12:37:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:15.768864627 +0000 UTC m=+1.135209639" watchObservedRunningTime="2025-03-21 12:37:15.778106758 +0000 UTC m=+1.144451770" Mar 21 12:37:19.316776 sudo[1714]: pam_unix(sudo:session): session closed for user root Mar 21 12:37:19.318307 sshd[1713]: Connection closed by 10.0.0.1 port 58992 Mar 21 12:37:19.318813 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Mar 21 12:37:19.322889 systemd[1]: sshd@6-10.0.0.113:22-10.0.0.1:58992.service: Deactivated successfully. Mar 21 12:37:19.324985 systemd[1]: session-7.scope: Deactivated successfully. Mar 21 12:37:19.325222 systemd[1]: session-7.scope: Consumed 4.554s CPU time, 213.4M memory peak. Mar 21 12:37:19.326407 systemd-logind[1503]: Session 7 logged out. Waiting for processes to exit. Mar 21 12:37:19.327156 systemd-logind[1503]: Removed session 7. Mar 21 12:37:20.311820 kubelet[2625]: I0321 12:37:20.311769 2625 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 21 12:37:20.312218 containerd[1520]: time="2025-03-21T12:37:20.312131276Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 21 12:37:20.312494 kubelet[2625]: I0321 12:37:20.312326 2625 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 21 12:37:21.130839 systemd[1]: Created slice kubepods-besteffort-pode91e78a7_cf8f_4007_bef7_6f62b5db4605.slice - libcontainer container kubepods-besteffort-pode91e78a7_cf8f_4007_bef7_6f62b5db4605.slice. 
Mar 21 12:37:21.154085 kubelet[2625]: I0321 12:37:21.154016 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e91e78a7-cf8f-4007-bef7-6f62b5db4605-xtables-lock\") pod \"kube-proxy-6tf6p\" (UID: \"e91e78a7-cf8f-4007-bef7-6f62b5db4605\") " pod="kube-system/kube-proxy-6tf6p"
Mar 21 12:37:21.154085 kubelet[2625]: I0321 12:37:21.154068 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e91e78a7-cf8f-4007-bef7-6f62b5db4605-lib-modules\") pod \"kube-proxy-6tf6p\" (UID: \"e91e78a7-cf8f-4007-bef7-6f62b5db4605\") " pod="kube-system/kube-proxy-6tf6p"
Mar 21 12:37:21.154085 kubelet[2625]: I0321 12:37:21.154093 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e91e78a7-cf8f-4007-bef7-6f62b5db4605-kube-proxy\") pod \"kube-proxy-6tf6p\" (UID: \"e91e78a7-cf8f-4007-bef7-6f62b5db4605\") " pod="kube-system/kube-proxy-6tf6p"
Mar 21 12:37:21.154328 kubelet[2625]: I0321 12:37:21.154115 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qcc9m\" (UniqueName: \"kubernetes.io/projected/e91e78a7-cf8f-4007-bef7-6f62b5db4605-kube-api-access-qcc9m\") pod \"kube-proxy-6tf6p\" (UID: \"e91e78a7-cf8f-4007-bef7-6f62b5db4605\") " pod="kube-system/kube-proxy-6tf6p"
Mar 21 12:37:21.442267 containerd[1520]: time="2025-03-21T12:37:21.440036906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6tf6p,Uid:e91e78a7-cf8f-4007-bef7-6f62b5db4605,Namespace:kube-system,Attempt:0,}"
Mar 21 12:37:21.631249 systemd[1]: Created slice kubepods-besteffort-pod4d658934_2f3b_46ea_b1c5_7ad2213956b9.slice - libcontainer container kubepods-besteffort-pod4d658934_2f3b_46ea_b1c5_7ad2213956b9.slice.
Mar 21 12:37:21.657849 kubelet[2625]: I0321 12:37:21.657802 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pz9tx\" (UniqueName: \"kubernetes.io/projected/4d658934-2f3b-46ea-b1c5-7ad2213956b9-kube-api-access-pz9tx\") pod \"tigera-operator-ccfc44587-h4bmx\" (UID: \"4d658934-2f3b-46ea-b1c5-7ad2213956b9\") " pod="tigera-operator/tigera-operator-ccfc44587-h4bmx"
Mar 21 12:37:21.657849 kubelet[2625]: I0321 12:37:21.657845 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4d658934-2f3b-46ea-b1c5-7ad2213956b9-var-lib-calico\") pod \"tigera-operator-ccfc44587-h4bmx\" (UID: \"4d658934-2f3b-46ea-b1c5-7ad2213956b9\") " pod="tigera-operator/tigera-operator-ccfc44587-h4bmx"
Mar 21 12:37:21.859516 containerd[1520]: time="2025-03-21T12:37:21.859392638Z" level=info msg="connecting to shim 0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592" address="unix:///run/containerd/s/14b379ce1cb0c4cf81932bf63db2ff624f75f018f3d453f8e9c0087ad59e87ca" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:21.910378 systemd[1]: Started cri-containerd-0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592.scope - libcontainer container 0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592.
Mar 21 12:37:21.934649 containerd[1520]: time="2025-03-21T12:37:21.934599297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-h4bmx,Uid:4d658934-2f3b-46ea-b1c5-7ad2213956b9,Namespace:tigera-operator,Attempt:0,}"
Mar 21 12:37:21.967952 containerd[1520]: time="2025-03-21T12:37:21.967902933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6tf6p,Uid:e91e78a7-cf8f-4007-bef7-6f62b5db4605,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592\""
Mar 21 12:37:21.970220 containerd[1520]: time="2025-03-21T12:37:21.970177852Z" level=info msg="CreateContainer within sandbox \"0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Mar 21 12:37:21.988214 containerd[1520]: time="2025-03-21T12:37:21.988168007Z" level=info msg="Container 6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:22.003797 containerd[1520]: time="2025-03-21T12:37:22.003705732Z" level=info msg="CreateContainer within sandbox \"0a7515d0d69628a52f828708e6299d51209a3dc555f03f5b3d79f59188160592\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099\""
Mar 21 12:37:22.008324 containerd[1520]: time="2025-03-21T12:37:22.008258527Z" level=info msg="StartContainer for \"6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099\""
Mar 21 12:37:22.009674 containerd[1520]: time="2025-03-21T12:37:22.009417532Z" level=info msg="connecting to shim b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add" address="unix:///run/containerd/s/11d2d6cf09fbd4561206ab6d6587a79bf23e183b6acf15597051fe2a7bbeb4a2" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:22.010813 containerd[1520]: time="2025-03-21T12:37:22.010771318Z" level=info msg="connecting to shim 6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099" address="unix:///run/containerd/s/14b379ce1cb0c4cf81932bf63db2ff624f75f018f3d453f8e9c0087ad59e87ca" protocol=ttrpc version=3
Mar 21 12:37:22.036423 systemd[1]: Started cri-containerd-6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099.scope - libcontainer container 6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099.
Mar 21 12:37:22.038288 systemd[1]: Started cri-containerd-b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add.scope - libcontainer container b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add.
Mar 21 12:37:22.088181 containerd[1520]: time="2025-03-21T12:37:22.088150199Z" level=info msg="StartContainer for \"6550e8ff994deadb274646dc5583c6846c7fa35fcb7c7f22130eb1353167d099\" returns successfully"
Mar 21 12:37:22.090004 containerd[1520]: time="2025-03-21T12:37:22.089479102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-ccfc44587-h4bmx,Uid:4d658934-2f3b-46ea-b1c5-7ad2213956b9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add\""
Mar 21 12:37:22.091511 containerd[1520]: time="2025-03-21T12:37:22.091464812Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\""
Mar 21 12:37:22.293177 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount794793877.mount: Deactivated successfully.
Mar 21 12:37:22.751277 kubelet[2625]: I0321 12:37:22.751083 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-6tf6p" podStartSLOduration=1.7510640830000002 podStartE2EDuration="1.751064083s" podCreationTimestamp="2025-03-21 12:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:22.750881488 +0000 UTC m=+8.117226500" watchObservedRunningTime="2025-03-21 12:37:22.751064083 +0000 UTC m=+8.117409095"
Mar 21 12:37:23.390020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2427280181.mount: Deactivated successfully.
Mar 21 12:37:24.319007 containerd[1520]: time="2025-03-21T12:37:24.318944321Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:24.329262 containerd[1520]: time="2025-03-21T12:37:24.329165133Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008"
Mar 21 12:37:24.339063 containerd[1520]: time="2025-03-21T12:37:24.339030404Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:24.353692 containerd[1520]: time="2025-03-21T12:37:24.353649398Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:24.354378 containerd[1520]: time="2025-03-21T12:37:24.354328144Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 2.262812656s"
Mar 21 12:37:24.354378 containerd[1520]: time="2025-03-21T12:37:24.354373438Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\""
Mar 21 12:37:24.356053 containerd[1520]: time="2025-03-21T12:37:24.356021615Z" level=info msg="CreateContainer within sandbox \"b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Mar 21 12:37:24.442739 containerd[1520]: time="2025-03-21T12:37:24.442699026Z" level=info msg="Container 7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:24.500205 containerd[1520]: time="2025-03-21T12:37:24.500139212Z" level=info msg="CreateContainer within sandbox \"b2d8681a47680a98b3d3e438f1c4cd4ceb57440f9c154f4d07b6a1f62dd16add\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5\""
Mar 21 12:37:24.500826 containerd[1520]: time="2025-03-21T12:37:24.500768584Z" level=info msg="StartContainer for \"7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5\""
Mar 21 12:37:24.501701 containerd[1520]: time="2025-03-21T12:37:24.501677199Z" level=info msg="connecting to shim 7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5" address="unix:///run/containerd/s/11d2d6cf09fbd4561206ab6d6587a79bf23e183b6acf15597051fe2a7bbeb4a2" protocol=ttrpc version=3
Mar 21 12:37:24.522462 systemd[1]: Started cri-containerd-7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5.scope - libcontainer container 7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5.
Mar 21 12:37:24.570014 containerd[1520]: time="2025-03-21T12:37:24.569550520Z" level=info msg="StartContainer for \"7245d5870550491b04bc967c1ec171cdf2f115d242ec971818ddabcbb8df5af5\" returns successfully"
Mar 21 12:37:24.756126 kubelet[2625]: I0321 12:37:24.755950 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-ccfc44587-h4bmx" podStartSLOduration=1.491852015 podStartE2EDuration="3.755934752s" podCreationTimestamp="2025-03-21 12:37:21 +0000 UTC" firstStartedPulling="2025-03-21 12:37:22.090966721 +0000 UTC m=+7.457311733" lastFinishedPulling="2025-03-21 12:37:24.355049458 +0000 UTC m=+9.721394470" observedRunningTime="2025-03-21 12:37:24.75584788 +0000 UTC m=+10.122192912" watchObservedRunningTime="2025-03-21 12:37:24.755934752 +0000 UTC m=+10.122279764"
Mar 21 12:37:27.485565 systemd[1]: Created slice kubepods-besteffort-pod47303f4f_c18d_436b_8568_d442ff58509c.slice - libcontainer container kubepods-besteffort-pod47303f4f_c18d_436b_8568_d442ff58509c.slice.
Mar 21 12:37:27.493572 kubelet[2625]: I0321 12:37:27.493534 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/47303f4f-c18d-436b-8568-d442ff58509c-tigera-ca-bundle\") pod \"calico-typha-fd89bdd5f-zhl9s\" (UID: \"47303f4f-c18d-436b-8568-d442ff58509c\") " pod="calico-system/calico-typha-fd89bdd5f-zhl9s"
Mar 21 12:37:27.493572 kubelet[2625]: I0321 12:37:27.493569 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/47303f4f-c18d-436b-8568-d442ff58509c-typha-certs\") pod \"calico-typha-fd89bdd5f-zhl9s\" (UID: \"47303f4f-c18d-436b-8568-d442ff58509c\") " pod="calico-system/calico-typha-fd89bdd5f-zhl9s"
Mar 21 12:37:27.494012 kubelet[2625]: I0321 12:37:27.493624 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6l6tv\" (UniqueName: \"kubernetes.io/projected/47303f4f-c18d-436b-8568-d442ff58509c-kube-api-access-6l6tv\") pod \"calico-typha-fd89bdd5f-zhl9s\" (UID: \"47303f4f-c18d-436b-8568-d442ff58509c\") " pod="calico-system/calico-typha-fd89bdd5f-zhl9s"
Mar 21 12:37:27.533597 systemd[1]: Created slice kubepods-besteffort-pod777ef011_bdb9_42ca_94e4_8719552b8348.slice - libcontainer container kubepods-besteffort-pod777ef011_bdb9_42ca_94e4_8719552b8348.slice.
Mar 21 12:37:27.594662 kubelet[2625]: I0321 12:37:27.594609 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgm4g\" (UniqueName: \"kubernetes.io/projected/777ef011-bdb9-42ca-94e4-8719552b8348-kube-api-access-mgm4g\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594662 kubelet[2625]: I0321 12:37:27.594653 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-var-run-calico\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594847 kubelet[2625]: I0321 12:37:27.594685 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-xtables-lock\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594847 kubelet[2625]: I0321 12:37:27.594703 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/777ef011-bdb9-42ca-94e4-8719552b8348-tigera-ca-bundle\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594847 kubelet[2625]: I0321 12:37:27.594721 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-cni-net-dir\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594847 kubelet[2625]: I0321 
12:37:27.594751 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-policysync\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.594940 kubelet[2625]: I0321 12:37:27.594918 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/777ef011-bdb9-42ca-94e4-8719552b8348-node-certs\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.595515 kubelet[2625]: I0321 12:37:27.595484 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-cni-bin-dir\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.595597 kubelet[2625]: I0321 12:37:27.595581 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-var-lib-calico\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.595631 kubelet[2625]: I0321 12:37:27.595607 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-cni-log-dir\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.595681 kubelet[2625]: I0321 12:37:27.595634 2625 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-lib-modules\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.595681 kubelet[2625]: I0321 12:37:27.595655 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/777ef011-bdb9-42ca-94e4-8719552b8348-flexvol-driver-host\") pod \"calico-node-4cz5c\" (UID: \"777ef011-bdb9-42ca-94e4-8719552b8348\") " pod="calico-system/calico-node-4cz5c" Mar 21 12:37:27.636010 kubelet[2625]: E0321 12:37:27.635963 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:27.696226 kubelet[2625]: I0321 12:37:27.696191 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/fd2a490d-bd62-49bd-ba85-983f4d907bf7-registration-dir\") pod \"csi-node-driver-m44lj\" (UID: \"fd2a490d-bd62-49bd-ba85-983f4d907bf7\") " pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:27.696226 kubelet[2625]: I0321 12:37:27.696224 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jktj9\" (UniqueName: \"kubernetes.io/projected/fd2a490d-bd62-49bd-ba85-983f4d907bf7-kube-api-access-jktj9\") pod \"csi-node-driver-m44lj\" (UID: \"fd2a490d-bd62-49bd-ba85-983f4d907bf7\") " pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:27.696398 kubelet[2625]: I0321 12:37:27.696268 2625 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/fd2a490d-bd62-49bd-ba85-983f4d907bf7-varrun\") pod \"csi-node-driver-m44lj\" (UID: \"fd2a490d-bd62-49bd-ba85-983f4d907bf7\") " pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:27.696398 kubelet[2625]: I0321 12:37:27.696327 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/fd2a490d-bd62-49bd-ba85-983f4d907bf7-socket-dir\") pod \"csi-node-driver-m44lj\" (UID: \"fd2a490d-bd62-49bd-ba85-983f4d907bf7\") " pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:27.696480 kubelet[2625]: I0321 12:37:27.696464 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fd2a490d-bd62-49bd-ba85-983f4d907bf7-kubelet-dir\") pod \"csi-node-driver-m44lj\" (UID: \"fd2a490d-bd62-49bd-ba85-983f4d907bf7\") " pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:27.698342 kubelet[2625]: E0321 12:37:27.698302 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.698342 kubelet[2625]: W0321 12:37:27.698331 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.698452 kubelet[2625]: E0321 12:37:27.698353 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:37:27.700396 kubelet[2625]: E0321 12:37:27.700373 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.700396 kubelet[2625]: W0321 12:37:27.700392 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.700462 kubelet[2625]: E0321 12:37:27.700407 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:37:27.703639 kubelet[2625]: E0321 12:37:27.703604 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.703639 kubelet[2625]: W0321 12:37:27.703620 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.703730 kubelet[2625]: E0321 12:37:27.703708 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:37:27.788297 containerd[1520]: time="2025-03-21T12:37:27.788165671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd89bdd5f-zhl9s,Uid:47303f4f-c18d-436b-8568-d442ff58509c,Namespace:calico-system,Attempt:0,}" Mar 21 12:37:27.797770 kubelet[2625]: E0321 12:37:27.797737 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.797770 kubelet[2625]: W0321 12:37:27.797759 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.797909 kubelet[2625]: E0321 12:37:27.797781 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:37:27.798115 kubelet[2625]: E0321 12:37:27.798092 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.798143 kubelet[2625]: W0321 12:37:27.798113 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.798168 kubelet[2625]: E0321 12:37:27.798139 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:37:27.798455 kubelet[2625]: E0321 12:37:27.798436 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.798455 kubelet[2625]: W0321 12:37:27.798450 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.798507 kubelet[2625]: E0321 12:37:27.798467 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 21 12:37:27.798713 kubelet[2625]: E0321 12:37:27.798697 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 21 12:37:27.798713 kubelet[2625]: W0321 12:37:27.798708 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 21 12:37:27.798765 kubelet[2625]: E0321 12:37:27.798723 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Mar 21 12:37:27.836845 containerd[1520]: time="2025-03-21T12:37:27.836801871Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4cz5c,Uid:777ef011-bdb9-42ca-94e4-8719552b8348,Namespace:calico-system,Attempt:0,}"
Mar 21 12:37:27.873185 containerd[1520]: time="2025-03-21T12:37:27.873102214Z" level=info msg="connecting to shim c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4" address="unix:///run/containerd/s/68d41387a4e8872819428fbd7030da92c595694187b2d1689e3859a0ded7eb38" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:27.878775 containerd[1520]: time="2025-03-21T12:37:27.878724873Z" level=info msg="connecting to shim 7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30" address="unix:///run/containerd/s/c3b89a321007cbcba0347df90955d9a0b9bafee3e40fd5fd7669dfe2d0f4b325" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:27.904403 systemd[1]: Started cri-containerd-c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4.scope - libcontainer container c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4.
Mar 21 12:37:27.908842 systemd[1]: Started cri-containerd-7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30.scope - libcontainer container 7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30.
Mar 21 12:37:28.021272 containerd[1520]: time="2025-03-21T12:37:28.021204036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4cz5c,Uid:777ef011-bdb9-42ca-94e4-8719552b8348,Namespace:calico-system,Attempt:0,} returns sandbox id \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\""
Mar 21 12:37:28.023120 containerd[1520]: time="2025-03-21T12:37:28.022539988Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\""
Mar 21 12:37:28.026188 containerd[1520]: time="2025-03-21T12:37:28.026153999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-fd89bdd5f-zhl9s,Uid:47303f4f-c18d-436b-8568-d442ff58509c,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30\""
Mar 21 12:37:28.892697 kubelet[2625]: E0321 12:37:28.892658 2625 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Mar 21 12:37:28.892697 kubelet[2625]: W0321 12:37:28.892681 2625 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Mar 21 12:37:28.892697 kubelet[2625]: E0321 12:37:28.892702 2625 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 21 12:37:29.550609 containerd[1520]: time="2025-03-21T12:37:29.550554959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:29.551317 containerd[1520]: time="2025-03-21T12:37:29.551266628Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 21 12:37:29.552342 containerd[1520]: time="2025-03-21T12:37:29.552304723Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:29.554124 containerd[1520]: time="2025-03-21T12:37:29.554095550Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:29.554707 containerd[1520]: time="2025-03-21T12:37:29.554673907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.531552179s" Mar 21 12:37:29.554707 containerd[1520]: time="2025-03-21T12:37:29.554702345Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 21 12:37:29.555570 containerd[1520]: time="2025-03-21T12:37:29.555542105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 21 12:37:29.557392 containerd[1520]: time="2025-03-21T12:37:29.557359235Z" 
level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 21 12:37:29.567422 containerd[1520]: time="2025-03-21T12:37:29.567377907Z" level=info msg="Container b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:29.575302 containerd[1520]: time="2025-03-21T12:37:29.575267613Z" level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\"" Mar 21 12:37:29.575822 containerd[1520]: time="2025-03-21T12:37:29.575773173Z" level=info msg="StartContainer for \"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\"" Mar 21 12:37:29.577249 containerd[1520]: time="2025-03-21T12:37:29.577205118Z" level=info msg="connecting to shim b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a" address="unix:///run/containerd/s/68d41387a4e8872819428fbd7030da92c595694187b2d1689e3859a0ded7eb38" protocol=ttrpc version=3 Mar 21 12:37:29.609443 systemd[1]: Started cri-containerd-b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a.scope - libcontainer container b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a. Mar 21 12:37:29.664594 systemd[1]: cri-containerd-b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a.scope: Deactivated successfully. 
Mar 21 12:37:29.668267 containerd[1520]: time="2025-03-21T12:37:29.668216869Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\" id:\"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\" pid:3189 exited_at:{seconds:1742560649 nanos:667733625}" Mar 21 12:37:29.683021 containerd[1520]: time="2025-03-21T12:37:29.682988079Z" level=info msg="received exit event container_id:\"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\" id:\"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\" pid:3189 exited_at:{seconds:1742560649 nanos:667733625}" Mar 21 12:37:29.684586 containerd[1520]: time="2025-03-21T12:37:29.684555822Z" level=info msg="StartContainer for \"b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a\" returns successfully" Mar 21 12:37:29.705044 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1fa0c9f997715640f6da958ad08d3fd0ec9ce92728ac3f6ae985721a08fb48a-rootfs.mount: Deactivated successfully. Mar 21 12:37:29.717291 kubelet[2625]: E0321 12:37:29.717237 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:30.924347 update_engine[1508]: I20250321 12:37:30.924274 1508 update_attempter.cc:509] Updating boot flags... 
Mar 21 12:37:30.958283 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3235) Mar 21 12:37:30.998296 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3233) Mar 21 12:37:31.040251 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3233) Mar 21 12:37:31.716615 kubelet[2625]: E0321 12:37:31.716564 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:32.585221 containerd[1520]: time="2025-03-21T12:37:32.585166309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:32.585996 containerd[1520]: time="2025-03-21T12:37:32.585923994Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 21 12:37:32.587175 containerd[1520]: time="2025-03-21T12:37:32.587141376Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:32.588953 containerd[1520]: time="2025-03-21T12:37:32.588922904Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:32.589549 containerd[1520]: time="2025-03-21T12:37:32.589510737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag 
\"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 3.033939252s" Mar 21 12:37:32.589583 containerd[1520]: time="2025-03-21T12:37:32.589548283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 21 12:37:32.590456 containerd[1520]: time="2025-03-21T12:37:32.590424227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 21 12:37:32.597878 containerd[1520]: time="2025-03-21T12:37:32.597825451Z" level=info msg="CreateContainer within sandbox \"7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 21 12:37:32.605558 containerd[1520]: time="2025-03-21T12:37:32.605528803Z" level=info msg="Container 9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:32.612543 containerd[1520]: time="2025-03-21T12:37:32.612506235Z" level=info msg="CreateContainer within sandbox \"7b775f603e78dec2a95a7f64f1cb242c722d33af1d2b1fb8516cf839a85c8a30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933\"" Mar 21 12:37:32.613157 containerd[1520]: time="2025-03-21T12:37:32.612989899Z" level=info msg="StartContainer for \"9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933\"" Mar 21 12:37:32.614034 containerd[1520]: time="2025-03-21T12:37:32.614009371Z" level=info msg="connecting to shim 9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933" address="unix:///run/containerd/s/c3b89a321007cbcba0347df90955d9a0b9bafee3e40fd5fd7669dfe2d0f4b325" protocol=ttrpc version=3 Mar 21 12:37:32.637364 systemd[1]: Started 
cri-containerd-9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933.scope - libcontainer container 9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933. Mar 21 12:37:32.684327 containerd[1520]: time="2025-03-21T12:37:32.684285350Z" level=info msg="StartContainer for \"9a4204ed940915a5ffe7149ee6a15b2a2d31413b38c4ef718df604dfa63ff933\" returns successfully" Mar 21 12:37:32.775372 kubelet[2625]: I0321 12:37:32.775302 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-fd89bdd5f-zhl9s" podStartSLOduration=1.210799235 podStartE2EDuration="5.774089304s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="2025-03-21 12:37:28.026962662 +0000 UTC m=+13.393307674" lastFinishedPulling="2025-03-21 12:37:32.590252731 +0000 UTC m=+17.956597743" observedRunningTime="2025-03-21 12:37:32.773315735 +0000 UTC m=+18.139660747" watchObservedRunningTime="2025-03-21 12:37:32.774089304 +0000 UTC m=+18.140434316" Mar 21 12:37:33.716605 kubelet[2625]: E0321 12:37:33.716560 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:33.768147 kubelet[2625]: I0321 12:37:33.768109 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:37:35.716283 kubelet[2625]: E0321 12:37:35.716222 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:36.910954 containerd[1520]: time="2025-03-21T12:37:36.910906119Z" level=info 
msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:36.911748 containerd[1520]: time="2025-03-21T12:37:36.911703034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 21 12:37:36.912918 containerd[1520]: time="2025-03-21T12:37:36.912890356Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:36.917466 containerd[1520]: time="2025-03-21T12:37:36.917399509Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:36.918483 containerd[1520]: time="2025-03-21T12:37:36.918365560Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.327909751s" Mar 21 12:37:36.918483 containerd[1520]: time="2025-03-21T12:37:36.918406632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 21 12:37:36.920660 containerd[1520]: time="2025-03-21T12:37:36.920629315Z" level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 21 12:37:36.927929 containerd[1520]: time="2025-03-21T12:37:36.927732448Z" level=info msg="Container 05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426: CDI devices from CRI 
Config.CDIDevices: []" Mar 21 12:37:36.939582 containerd[1520]: time="2025-03-21T12:37:36.939549340Z" level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\"" Mar 21 12:37:36.940073 containerd[1520]: time="2025-03-21T12:37:36.940046649Z" level=info msg="StartContainer for \"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\"" Mar 21 12:37:36.941423 containerd[1520]: time="2025-03-21T12:37:36.941396305Z" level=info msg="connecting to shim 05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426" address="unix:///run/containerd/s/68d41387a4e8872819428fbd7030da92c595694187b2d1689e3859a0ded7eb38" protocol=ttrpc version=3 Mar 21 12:37:36.964382 systemd[1]: Started cri-containerd-05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426.scope - libcontainer container 05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426. Mar 21 12:37:37.028687 containerd[1520]: time="2025-03-21T12:37:37.028578391Z" level=info msg="StartContainer for \"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\" returns successfully" Mar 21 12:37:37.717091 kubelet[2625]: E0321 12:37:37.717009 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:37.865301 kubelet[2625]: I0321 12:37:37.865246 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:37:38.128988 systemd[1]: cri-containerd-05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426.scope: Deactivated successfully. 
Mar 21 12:37:38.129363 systemd[1]: cri-containerd-05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426.scope: Consumed 595ms CPU time, 160.5M memory peak, 8K read from disk, 154M written to disk. Mar 21 12:37:38.129889 containerd[1520]: time="2025-03-21T12:37:38.129831850Z" level=info msg="received exit event container_id:\"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\" id:\"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\" pid:3303 exited_at:{seconds:1742560658 nanos:129610421}" Mar 21 12:37:38.130312 containerd[1520]: time="2025-03-21T12:37:38.129967207Z" level=info msg="TaskExit event in podsandbox handler container_id:\"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\" id:\"05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426\" pid:3303 exited_at:{seconds:1742560658 nanos:129610421}" Mar 21 12:37:38.147010 kubelet[2625]: I0321 12:37:38.146976 2625 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Mar 21 12:37:38.153371 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05697a2cadbb11536339525992feaab49cf806dc0305b8f25eb0acd9e338a426-rootfs.mount: Deactivated successfully. Mar 21 12:37:38.192404 systemd[1]: Created slice kubepods-besteffort-pod6036665f_7b1f_4c78_a49e_d8843ac34b11.slice - libcontainer container kubepods-besteffort-pod6036665f_7b1f_4c78_a49e_d8843ac34b11.slice. Mar 21 12:37:38.198968 systemd[1]: Created slice kubepods-burstable-pod875e79d5_9241_4c45_b372_35772b21a67c.slice - libcontainer container kubepods-burstable-pod875e79d5_9241_4c45_b372_35772b21a67c.slice. Mar 21 12:37:38.204504 systemd[1]: Created slice kubepods-besteffort-pod486d9796_33fc_4e2c_8463_31f9da64701f.slice - libcontainer container kubepods-besteffort-pod486d9796_33fc_4e2c_8463_31f9da64701f.slice. 
Mar 21 12:37:38.209728 systemd[1]: Created slice kubepods-burstable-poda5ee7a9f_7f1a_42aa_a7ee_2f6c15c27d6a.slice - libcontainer container kubepods-burstable-poda5ee7a9f_7f1a_42aa_a7ee_2f6c15c27d6a.slice. Mar 21 12:37:38.210605 kubelet[2625]: I0321 12:37:38.210565 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/486d9796-33fc-4e2c-8463-31f9da64701f-tigera-ca-bundle\") pod \"calico-kube-controllers-5f8966dc87-5ddj5\" (UID: \"486d9796-33fc-4e2c-8463-31f9da64701f\") " pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" Mar 21 12:37:38.210734 kubelet[2625]: I0321 12:37:38.210609 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f969ccdc-e0a7-429f-b9ae-ce6d16a47c63-calico-apiserver-certs\") pod \"calico-apiserver-69bfdb59c6-lr9hk\" (UID: \"f969ccdc-e0a7-429f-b9ae-ce6d16a47c63\") " pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" Mar 21 12:37:38.210734 kubelet[2625]: I0321 12:37:38.210634 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6036665f-7b1f-4c78-a49e-d8843ac34b11-calico-apiserver-certs\") pod \"calico-apiserver-69bfdb59c6-q6fm8\" (UID: \"6036665f-7b1f-4c78-a49e-d8843ac34b11\") " pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" Mar 21 12:37:38.210734 kubelet[2625]: I0321 12:37:38.210654 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjwdb\" (UniqueName: \"kubernetes.io/projected/a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a-kube-api-access-sjwdb\") pod \"coredns-668d6bf9bc-crrwt\" (UID: \"a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a\") " pod="kube-system/coredns-668d6bf9bc-crrwt" Mar 21 12:37:38.210734 kubelet[2625]: I0321 12:37:38.210677 2625 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7t5p\" (UniqueName: \"kubernetes.io/projected/486d9796-33fc-4e2c-8463-31f9da64701f-kube-api-access-q7t5p\") pod \"calico-kube-controllers-5f8966dc87-5ddj5\" (UID: \"486d9796-33fc-4e2c-8463-31f9da64701f\") " pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" Mar 21 12:37:38.210734 kubelet[2625]: I0321 12:37:38.210700 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a-config-volume\") pod \"coredns-668d6bf9bc-crrwt\" (UID: \"a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a\") " pod="kube-system/coredns-668d6bf9bc-crrwt" Mar 21 12:37:38.210909 kubelet[2625]: I0321 12:37:38.210721 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g4z2f\" (UniqueName: \"kubernetes.io/projected/6036665f-7b1f-4c78-a49e-d8843ac34b11-kube-api-access-g4z2f\") pod \"calico-apiserver-69bfdb59c6-q6fm8\" (UID: \"6036665f-7b1f-4c78-a49e-d8843ac34b11\") " pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" Mar 21 12:37:38.210909 kubelet[2625]: I0321 12:37:38.210741 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/875e79d5-9241-4c45-b372-35772b21a67c-config-volume\") pod \"coredns-668d6bf9bc-8k9rn\" (UID: \"875e79d5-9241-4c45-b372-35772b21a67c\") " pod="kube-system/coredns-668d6bf9bc-8k9rn" Mar 21 12:37:38.210909 kubelet[2625]: I0321 12:37:38.210768 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jllr5\" (UniqueName: \"kubernetes.io/projected/875e79d5-9241-4c45-b372-35772b21a67c-kube-api-access-jllr5\") pod \"coredns-668d6bf9bc-8k9rn\" (UID: \"875e79d5-9241-4c45-b372-35772b21a67c\") " 
pod="kube-system/coredns-668d6bf9bc-8k9rn" Mar 21 12:37:38.210909 kubelet[2625]: I0321 12:37:38.210788 2625 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wp6sq\" (UniqueName: \"kubernetes.io/projected/f969ccdc-e0a7-429f-b9ae-ce6d16a47c63-kube-api-access-wp6sq\") pod \"calico-apiserver-69bfdb59c6-lr9hk\" (UID: \"f969ccdc-e0a7-429f-b9ae-ce6d16a47c63\") " pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" Mar 21 12:37:38.214129 systemd[1]: Created slice kubepods-besteffort-podf969ccdc_e0a7_429f_b9ae_ce6d16a47c63.slice - libcontainer container kubepods-besteffort-podf969ccdc_e0a7_429f_b9ae_ce6d16a47c63.slice. Mar 21 12:37:38.497409 containerd[1520]: time="2025-03-21T12:37:38.497350363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-q6fm8,Uid:6036665f-7b1f-4c78-a49e-d8843ac34b11,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:37:38.503107 containerd[1520]: time="2025-03-21T12:37:38.503061700Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8k9rn,Uid:875e79d5-9241-4c45-b372-35772b21a67c,Namespace:kube-system,Attempt:0,}" Mar 21 12:37:38.506982 containerd[1520]: time="2025-03-21T12:37:38.506923906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8966dc87-5ddj5,Uid:486d9796-33fc-4e2c-8463-31f9da64701f,Namespace:calico-system,Attempt:0,}" Mar 21 12:37:38.513131 containerd[1520]: time="2025-03-21T12:37:38.513104733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crrwt,Uid:a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a,Namespace:kube-system,Attempt:0,}" Mar 21 12:37:38.519180 containerd[1520]: time="2025-03-21T12:37:38.519119552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-lr9hk,Uid:f969ccdc-e0a7-429f-b9ae-ce6d16a47c63,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:37:38.599697 containerd[1520]: 
time="2025-03-21T12:37:38.598371044Z" level=error msg="Failed to destroy network for sandbox \"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.599697 containerd[1520]: time="2025-03-21T12:37:38.599575478Z" level=error msg="Failed to destroy network for sandbox \"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.599975 containerd[1520]: time="2025-03-21T12:37:38.599924529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-q6fm8,Uid:6036665f-7b1f-4c78-a49e-d8843ac34b11,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.600390 kubelet[2625]: E0321 12:37:38.600329 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.600467 kubelet[2625]: E0321 12:37:38.600417 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" Mar 21 12:37:38.600467 kubelet[2625]: E0321 12:37:38.600454 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" Mar 21 12:37:38.601339 kubelet[2625]: E0321 12:37:38.600499 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bfdb59c6-q6fm8_calico-apiserver(6036665f-7b1f-4c78-a49e-d8843ac34b11)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bfdb59c6-q6fm8_calico-apiserver(6036665f-7b1f-4c78-a49e-d8843ac34b11)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36466ed668e9bcb11b84a79108eaa6e43568940078df1589792eff412c422fa4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" podUID="6036665f-7b1f-4c78-a49e-d8843ac34b11" Mar 21 12:37:38.602836 containerd[1520]: time="2025-03-21T12:37:38.602762037Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8k9rn,Uid:875e79d5-9241-4c45-b372-35772b21a67c,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.602995 kubelet[2625]: E0321 12:37:38.602964 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.603034 kubelet[2625]: E0321 12:37:38.602998 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8k9rn" Mar 21 12:37:38.603034 kubelet[2625]: E0321 12:37:38.603016 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-8k9rn" Mar 21 12:37:38.603089 kubelet[2625]: E0321 12:37:38.603050 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-8k9rn_kube-system(875e79d5-9241-4c45-b372-35772b21a67c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-8k9rn_kube-system(875e79d5-9241-4c45-b372-35772b21a67c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1eea000ce225683aca7dc728a10ea0c790c652e12b2430f1b4d8049dc03b4f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-8k9rn" podUID="875e79d5-9241-4c45-b372-35772b21a67c" Mar 21 12:37:38.604994 containerd[1520]: time="2025-03-21T12:37:38.604933165Z" level=error msg="Failed to destroy network for sandbox \"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.606521 containerd[1520]: time="2025-03-21T12:37:38.606418306Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crrwt,Uid:a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.606910 containerd[1520]: time="2025-03-21T12:37:38.606792356Z" level=error msg="Failed to destroy network for sandbox \"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.607172 kubelet[2625]: E0321 12:37:38.607135 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code 
= Unknown desc = failed to setup network for sandbox \"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.607269 kubelet[2625]: E0321 12:37:38.607182 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-crrwt" Mar 21 12:37:38.607269 kubelet[2625]: E0321 12:37:38.607202 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-crrwt" Mar 21 12:37:38.607376 kubelet[2625]: E0321 12:37:38.607266 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-crrwt_kube-system(a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-crrwt_kube-system(a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4cc59be848a09d91364b4b3d5d11fbe2689ca21f7dee8aaa634a08e613f3f49b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="kube-system/coredns-668d6bf9bc-crrwt" podUID="a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a" Mar 21 12:37:38.608400 containerd[1520]: time="2025-03-21T12:37:38.608355191Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8966dc87-5ddj5,Uid:486d9796-33fc-4e2c-8463-31f9da64701f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.608600 kubelet[2625]: E0321 12:37:38.608576 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.608647 kubelet[2625]: E0321 12:37:38.608606 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" Mar 21 12:37:38.608647 kubelet[2625]: E0321 12:37:38.608623 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" Mar 21 12:37:38.608699 kubelet[2625]: E0321 12:37:38.608653 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5f8966dc87-5ddj5_calico-system(486d9796-33fc-4e2c-8463-31f9da64701f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5f8966dc87-5ddj5_calico-system(486d9796-33fc-4e2c-8463-31f9da64701f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"028137114e4eca1d9e6db233cd555a876460a24dda240584ab85233ace3bdb02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" podUID="486d9796-33fc-4e2c-8463-31f9da64701f" Mar 21 12:37:38.609168 containerd[1520]: time="2025-03-21T12:37:38.609055307Z" level=error msg="Failed to destroy network for sandbox \"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.610416 containerd[1520]: time="2025-03-21T12:37:38.610392464Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-lr9hk,Uid:f969ccdc-e0a7-429f-b9ae-ce6d16a47c63,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 
12:37:38.610695 kubelet[2625]: E0321 12:37:38.610657 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:38.610747 kubelet[2625]: E0321 12:37:38.610720 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" Mar 21 12:37:38.610778 kubelet[2625]: E0321 12:37:38.610743 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" Mar 21 12:37:38.610806 kubelet[2625]: E0321 12:37:38.610785 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-69bfdb59c6-lr9hk_calico-apiserver(f969ccdc-e0a7-429f-b9ae-ce6d16a47c63)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-69bfdb59c6-lr9hk_calico-apiserver(f969ccdc-e0a7-429f-b9ae-ce6d16a47c63)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3b9fff5dace9c98304da32ed9bff38ead46cc85f5ef973c3254f80dc09bf5d24\\\": 
plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" podUID="f969ccdc-e0a7-429f-b9ae-ce6d16a47c63" Mar 21 12:37:38.916466 containerd[1520]: time="2025-03-21T12:37:38.916425819Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 21 12:37:39.721994 systemd[1]: Created slice kubepods-besteffort-podfd2a490d_bd62_49bd_ba85_983f4d907bf7.slice - libcontainer container kubepods-besteffort-podfd2a490d_bd62_49bd_ba85_983f4d907bf7.slice. Mar 21 12:37:39.724354 containerd[1520]: time="2025-03-21T12:37:39.724319527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m44lj,Uid:fd2a490d-bd62-49bd-ba85-983f4d907bf7,Namespace:calico-system,Attempt:0,}" Mar 21 12:37:39.776867 containerd[1520]: time="2025-03-21T12:37:39.776823289Z" level=error msg="Failed to destroy network for sandbox \"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:39.778183 containerd[1520]: time="2025-03-21T12:37:39.778148917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m44lj,Uid:fd2a490d-bd62-49bd-ba85-983f4d907bf7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:39.778537 kubelet[2625]: E0321 12:37:39.778492 2625 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc 
= failed to setup network for sandbox \"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 21 12:37:39.778888 kubelet[2625]: E0321 12:37:39.778563 2625 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:39.778888 kubelet[2625]: E0321 12:37:39.778586 2625 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-m44lj" Mar 21 12:37:39.778888 kubelet[2625]: E0321 12:37:39.778634 2625 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-m44lj_calico-system(fd2a490d-bd62-49bd-ba85-983f4d907bf7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-m44lj_calico-system(fd2a490d-bd62-49bd-ba85-983f4d907bf7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbcdf1e2ca2525ff0591492bffcbb53ccc4915cfa6ac7a31efb6208d56fe34ee\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-m44lj" podUID="fd2a490d-bd62-49bd-ba85-983f4d907bf7" Mar 21 12:37:39.779125 systemd[1]: run-netns-cni\x2d4dd02d93\x2d30ad\x2d8111\x2daefe\x2dd093619f07f2.mount: Deactivated successfully. Mar 21 12:37:44.452782 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2714450519.mount: Deactivated successfully. Mar 21 12:37:44.645819 systemd[1]: Started sshd@7-10.0.0.113:22-10.0.0.1:33498.service - OpenSSH per-connection server daemon (10.0.0.1:33498). Mar 21 12:37:45.302887 containerd[1520]: time="2025-03-21T12:37:45.302833468Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:45.303879 containerd[1520]: time="2025-03-21T12:37:45.303843923Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 21 12:37:45.305091 containerd[1520]: time="2025-03-21T12:37:45.305039588Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:45.307081 containerd[1520]: time="2025-03-21T12:37:45.307051068Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:45.307711 containerd[1520]: time="2025-03-21T12:37:45.307667032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.391207566s" Mar 21 12:37:45.307711 containerd[1520]: time="2025-03-21T12:37:45.307705247Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 21 12:37:45.317319 containerd[1520]: time="2025-03-21T12:37:45.317166651Z" level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 21 12:37:45.330823 sshd[3568]: Accepted publickey for core from 10.0.0.1 port 33498 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:37:45.332522 sshd-session[3568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:37:45.336787 systemd-logind[1503]: New session 8 of user core. Mar 21 12:37:45.346376 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 21 12:37:45.348482 containerd[1520]: time="2025-03-21T12:37:45.348446312Z" level=info msg="Container cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:45.365370 containerd[1520]: time="2025-03-21T12:37:45.365317113Z" level=info msg="CreateContainer within sandbox \"c8cacf29b2ef0b03429a4dac4c823b358eb0cc17870b1458309e50b361c66dc4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\"" Mar 21 12:37:45.365854 containerd[1520]: time="2025-03-21T12:37:45.365820907Z" level=info msg="StartContainer for \"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\"" Mar 21 12:37:45.367196 containerd[1520]: time="2025-03-21T12:37:45.367159181Z" level=info msg="connecting to shim cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365" address="unix:///run/containerd/s/68d41387a4e8872819428fbd7030da92c595694187b2d1689e3859a0ded7eb38" protocol=ttrpc version=3 Mar 21 12:37:45.400376 systemd[1]: Started cri-containerd-cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365.scope - 
libcontainer container cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365. Mar 21 12:37:45.454631 containerd[1520]: time="2025-03-21T12:37:45.454522583Z" level=info msg="StartContainer for \"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\" returns successfully" Mar 21 12:37:45.493249 sshd[3573]: Connection closed by 10.0.0.1 port 33498 Mar 21 12:37:45.494099 sshd-session[3568]: pam_unix(sshd:session): session closed for user core Mar 21 12:37:45.498656 systemd[1]: sshd@7-10.0.0.113:22-10.0.0.1:33498.service: Deactivated successfully. Mar 21 12:37:45.500846 systemd[1]: session-8.scope: Deactivated successfully. Mar 21 12:37:45.501623 systemd-logind[1503]: Session 8 logged out. Waiting for processes to exit. Mar 21 12:37:45.502858 systemd-logind[1503]: Removed session 8. Mar 21 12:37:45.521074 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 21 12:37:45.521217 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Mar 21 12:37:45.948752 kubelet[2625]: I0321 12:37:45.948690 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4cz5c" podStartSLOduration=1.662597246 podStartE2EDuration="18.948674629s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="2025-03-21 12:37:28.022277803 +0000 UTC m=+13.388622805" lastFinishedPulling="2025-03-21 12:37:45.308355186 +0000 UTC m=+30.674700188" observedRunningTime="2025-03-21 12:37:45.948356136 +0000 UTC m=+31.314701148" watchObservedRunningTime="2025-03-21 12:37:45.948674629 +0000 UTC m=+31.315019641" Mar 21 12:37:46.937119 kubelet[2625]: I0321 12:37:46.937065 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:37:46.991389 kernel: bpftool[3778]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 21 12:37:47.222916 systemd-networkd[1424]: vxlan.calico: Link UP Mar 21 12:37:47.222926 systemd-networkd[1424]: vxlan.calico: Gained carrier Mar 21 12:37:47.915573 containerd[1520]: time="2025-03-21T12:37:47.915526000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\" id:\"0fb8ce9527badcf2e60fe1b281b9b7f8674b2a0a1b2aad085476e397380bb89c\" pid:3863 exit_status:1 exited_at:{seconds:1742560667 nanos:915155559}" Mar 21 12:37:47.938819 kubelet[2625]: I0321 12:37:47.938781 2625 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 21 12:37:48.012148 containerd[1520]: time="2025-03-21T12:37:48.012093142Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\" id:\"de051606a3beef9f234c02c003848efb7255610c00c45af940eaa6e12405843c\" pid:3887 exit_status:1 exited_at:{seconds:1742560668 nanos:11805883}" Mar 21 12:37:48.316383 systemd-networkd[1424]: vxlan.calico: Gained IPv6LL Mar 21 12:37:49.717701 containerd[1520]: 
time="2025-03-21T12:37:49.717653439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-lr9hk,Uid:f969ccdc-e0a7-429f-b9ae-ce6d16a47c63,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:37:49.718100 containerd[1520]: time="2025-03-21T12:37:49.717884949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8k9rn,Uid:875e79d5-9241-4c45-b372-35772b21a67c,Namespace:kube-system,Attempt:0,}" Mar 21 12:37:50.131549 systemd-networkd[1424]: cali41efcc8eefe: Link UP Mar 21 12:37:50.131886 systemd-networkd[1424]: cali41efcc8eefe: Gained carrier Mar 21 12:37:50.143350 containerd[1520]: 2025-03-21 12:37:50.041 [INFO][3907] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0 coredns-668d6bf9bc- kube-system 875e79d5-9241-4c45-b372-35772b21a67c 707 0 2025-03-21 12:37:21 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-8k9rn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali41efcc8eefe [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-" Mar 21 12:37:50.143350 containerd[1520]: 2025-03-21 12:37:50.042 [INFO][3907] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.143350 containerd[1520]: 2025-03-21 12:37:50.094 [INFO][3930] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" HandleID="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Workload="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.103 [INFO][3930] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" HandleID="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Workload="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002e5670), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-8k9rn", "timestamp":"2025-03-21 12:37:50.094806795 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.103 [INFO][3930] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.103 [INFO][3930] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.104 [INFO][3930] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.106 [INFO][3930] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" host="localhost" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.109 [INFO][3930] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.113 [INFO][3930] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.114 [INFO][3930] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.116 [INFO][3930] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:50.143537 containerd[1520]: 2025-03-21 12:37:50.116 [INFO][3930] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" host="localhost" Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.117 [INFO][3930] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.121 [INFO][3930] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" host="localhost" Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3930] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 
handle="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" host="localhost" Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3930] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" host="localhost" Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3930] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:37:50.144616 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3930] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" HandleID="k8s-pod-network.5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Workload="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.144801 containerd[1520]: 2025-03-21 12:37:50.128 [INFO][3907] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"875e79d5-9241-4c45-b372-35772b21a67c", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-8k9rn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41efcc8eefe", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:50.144883 containerd[1520]: 2025-03-21 12:37:50.128 [INFO][3907] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.144883 containerd[1520]: 2025-03-21 12:37:50.128 [INFO][3907] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali41efcc8eefe ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.144883 containerd[1520]: 2025-03-21 12:37:50.131 [INFO][3907] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 
12:37:50.144981 containerd[1520]: 2025-03-21 12:37:50.132 [INFO][3907] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"875e79d5-9241-4c45-b372-35772b21a67c", ResourceVersion:"707", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c", Pod:"coredns-668d6bf9bc-8k9rn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali41efcc8eefe", MAC:"be:d0:f3:42:d0:da", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:50.144981 containerd[1520]: 2025-03-21 12:37:50.139 [INFO][3907] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" Namespace="kube-system" Pod="coredns-668d6bf9bc-8k9rn" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--8k9rn-eth0" Mar 21 12:37:50.240439 systemd-networkd[1424]: calia36cc069796: Link UP Mar 21 12:37:50.241140 systemd-networkd[1424]: calia36cc069796: Gained carrier Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.041 [INFO][3901] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0 calico-apiserver-69bfdb59c6- calico-apiserver f969ccdc-e0a7-429f-b9ae-ce6d16a47c63 708 0 2025-03-21 12:37:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bfdb59c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69bfdb59c6-lr9hk eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calia36cc069796 [] []}} ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.042 [INFO][3901] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.094 [INFO][3932] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" HandleID="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.104 [INFO][3932] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" HandleID="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319f40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69bfdb59c6-lr9hk", "timestamp":"2025-03-21 12:37:50.094801535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.104 [INFO][3932] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3932] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.125 [INFO][3932] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.206 [INFO][3932] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.210 [INFO][3932] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.216 [INFO][3932] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.218 [INFO][3932] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.220 [INFO][3932] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.220 [INFO][3932] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.222 [INFO][3932] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95 Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.226 [INFO][3932] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.233 [INFO][3932] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 
handle="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.233 [INFO][3932] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" host="localhost" Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.233 [INFO][3932] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:37:50.254959 containerd[1520]: 2025-03-21 12:37:50.233 [INFO][3932] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" HandleID="k8s-pod-network.d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.237 [INFO][3901] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0", GenerateName:"calico-apiserver-69bfdb59c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f969ccdc-e0a7-429f-b9ae-ce6d16a47c63", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bfdb59c6", "projectcalico.org/namespace":"calico-apiserver", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69bfdb59c6-lr9hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia36cc069796", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.238 [INFO][3901] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.238 [INFO][3901] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia36cc069796 ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.241 [INFO][3901] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.241 [INFO][3901] cni-plugin/k8s.go 414: Added Mac, interface name, and active 
container ID to endpoint ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0", GenerateName:"calico-apiserver-69bfdb59c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"f969ccdc-e0a7-429f-b9ae-ce6d16a47c63", ResourceVersion:"708", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bfdb59c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95", Pod:"calico-apiserver-69bfdb59c6-lr9hk", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calia36cc069796", MAC:"fa:17:a4:c1:c9:d0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:50.255922 containerd[1520]: 2025-03-21 12:37:50.251 [INFO][3901] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-lr9hk" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--lr9hk-eth0" Mar 21 12:37:50.271515 containerd[1520]: time="2025-03-21T12:37:50.271463232Z" level=info msg="connecting to shim 5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c" address="unix:///run/containerd/s/79e7328433c536fe9612d45f26e3c6ae6016e126227dfc9650059ae8dba301b1" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:50.285183 containerd[1520]: time="2025-03-21T12:37:50.285019666Z" level=info msg="connecting to shim d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95" address="unix:///run/containerd/s/12a3cfe4e2f437d8ac86c7c948b720f839162ac2b08fc338be1d669103281161" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:50.302383 systemd[1]: Started cri-containerd-5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c.scope - libcontainer container 5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c. Mar 21 12:37:50.305556 systemd[1]: Started cri-containerd-d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95.scope - libcontainer container d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95. 
Mar 21 12:37:50.315318 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:37:50.318303 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:37:50.347306 containerd[1520]: time="2025-03-21T12:37:50.347219362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8k9rn,Uid:875e79d5-9241-4c45-b372-35772b21a67c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c\"" Mar 21 12:37:50.350425 containerd[1520]: time="2025-03-21T12:37:50.350364277Z" level=info msg="CreateContainer within sandbox \"5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:37:50.351796 containerd[1520]: time="2025-03-21T12:37:50.351764304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-lr9hk,Uid:f969ccdc-e0a7-429f-b9ae-ce6d16a47c63,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95\"" Mar 21 12:37:50.353126 containerd[1520]: time="2025-03-21T12:37:50.353103594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 21 12:37:50.364193 containerd[1520]: time="2025-03-21T12:37:50.364156699Z" level=info msg="Container ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:50.370218 containerd[1520]: time="2025-03-21T12:37:50.370186523Z" level=info msg="CreateContainer within sandbox \"5739454253f4422f1d4a12d91db110b0a3dc3d6ce4954eee7aa93eb06ccc294c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb\"" Mar 21 12:37:50.370713 containerd[1520]: time="2025-03-21T12:37:50.370679711Z" level=info 
msg="StartContainer for \"ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb\"" Mar 21 12:37:50.371551 containerd[1520]: time="2025-03-21T12:37:50.371517367Z" level=info msg="connecting to shim ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb" address="unix:///run/containerd/s/79e7328433c536fe9612d45f26e3c6ae6016e126227dfc9650059ae8dba301b1" protocol=ttrpc version=3 Mar 21 12:37:50.394352 systemd[1]: Started cri-containerd-ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb.scope - libcontainer container ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb. Mar 21 12:37:50.424542 containerd[1520]: time="2025-03-21T12:37:50.424501563Z" level=info msg="StartContainer for \"ac47be56e47e3a43640396988ec7e79aa3cbd1df5d4b400ac2e4552b2381e9fb\" returns successfully" Mar 21 12:37:50.510657 systemd[1]: Started sshd@8-10.0.0.113:22-10.0.0.1:33506.service - OpenSSH per-connection server daemon (10.0.0.1:33506). Mar 21 12:37:50.569653 sshd[4103]: Accepted publickey for core from 10.0.0.1 port 33506 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk Mar 21 12:37:50.571175 sshd-session[4103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 21 12:37:50.575334 systemd-logind[1503]: New session 9 of user core. Mar 21 12:37:50.584365 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 21 12:37:50.709255 sshd[4109]: Connection closed by 10.0.0.1 port 33506 Mar 21 12:37:50.708256 sshd-session[4103]: pam_unix(sshd:session): session closed for user core Mar 21 12:37:50.713345 systemd[1]: sshd@8-10.0.0.113:22-10.0.0.1:33506.service: Deactivated successfully. Mar 21 12:37:50.715553 systemd[1]: session-9.scope: Deactivated successfully. Mar 21 12:37:50.716205 systemd-logind[1503]: Session 9 logged out. Waiting for processes to exit. Mar 21 12:37:50.717535 systemd-logind[1503]: Removed session 9. 
Mar 21 12:37:50.990579 kubelet[2625]: I0321 12:37:50.990445 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8k9rn" podStartSLOduration=29.990426067 podStartE2EDuration="29.990426067s" podCreationTimestamp="2025-03-21 12:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:50.987311131 +0000 UTC m=+36.353656143" watchObservedRunningTime="2025-03-21 12:37:50.990426067 +0000 UTC m=+36.356771079" Mar 21 12:37:51.452389 systemd-networkd[1424]: cali41efcc8eefe: Gained IPv6LL Mar 21 12:37:51.964411 systemd-networkd[1424]: calia36cc069796: Gained IPv6LL Mar 21 12:37:52.717093 containerd[1520]: time="2025-03-21T12:37:52.717033003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8966dc87-5ddj5,Uid:486d9796-33fc-4e2c-8463-31f9da64701f,Namespace:calico-system,Attempt:0,}" Mar 21 12:37:52.820584 systemd-networkd[1424]: cali5f2c7e059b5: Link UP Mar 21 12:37:52.820782 systemd-networkd[1424]: cali5f2c7e059b5: Gained carrier Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.761 [INFO][4135] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0 calico-kube-controllers-5f8966dc87- calico-system 486d9796-33fc-4e2c-8463-31f9da64701f 704 0 2025-03-21 12:37:27 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5f8966dc87 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5f8966dc87-5ddj5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5f2c7e059b5 [] []}} 
ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.761 [INFO][4135] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.785 [INFO][4153] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" HandleID="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Workload="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.794 [INFO][4153] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" HandleID="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Workload="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027ef10), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5f8966dc87-5ddj5", "timestamp":"2025-03-21 12:37:52.785947276 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.794 [INFO][4153] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.794 [INFO][4153] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.794 [INFO][4153] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.796 [INFO][4153] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.799 [INFO][4153] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.802 [INFO][4153] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.804 [INFO][4153] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.806 [INFO][4153] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.806 [INFO][4153] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.807 [INFO][4153] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.810 [INFO][4153] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.814 [INFO][4153] 
ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.814 [INFO][4153] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" host="localhost" Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.815 [INFO][4153] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:37:52.834086 containerd[1520]: 2025-03-21 12:37:52.815 [INFO][4153] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" HandleID="k8s-pod-network.66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Workload="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.834729 containerd[1520]: 2025-03-21 12:37:52.817 [INFO][4135] cni-plugin/k8s.go 386: Populated endpoint ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0", GenerateName:"calico-kube-controllers-5f8966dc87-", Namespace:"calico-system", SelfLink:"", UID:"486d9796-33fc-4e2c-8463-31f9da64701f", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", 
"pod-template-hash":"5f8966dc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5f8966dc87-5ddj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f2c7e059b5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:52.834729 containerd[1520]: 2025-03-21 12:37:52.817 [INFO][4135] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.834729 containerd[1520]: 2025-03-21 12:37:52.817 [INFO][4135] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5f2c7e059b5 ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.834729 containerd[1520]: 2025-03-21 12:37:52.820 [INFO][4135] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 
21 12:37:52.834729 containerd[1520]: 2025-03-21 12:37:52.820 [INFO][4135] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0", GenerateName:"calico-kube-controllers-5f8966dc87-", Namespace:"calico-system", SelfLink:"", UID:"486d9796-33fc-4e2c-8463-31f9da64701f", ResourceVersion:"704", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5f8966dc87", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef", Pod:"calico-kube-controllers-5f8966dc87-5ddj5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5f2c7e059b5", MAC:"8e:9a:d9:a8:bb:3d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:52.834729 containerd[1520]: 
2025-03-21 12:37:52.829 [INFO][4135] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" Namespace="calico-system" Pod="calico-kube-controllers-5f8966dc87-5ddj5" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5f8966dc87--5ddj5-eth0" Mar 21 12:37:52.870430 containerd[1520]: time="2025-03-21T12:37:52.870384268Z" level=info msg="connecting to shim 66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef" address="unix:///run/containerd/s/32685d29e298560b8bdd027d028a68fe915e740a86d000c22ee34cade4605bb1" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:52.895395 systemd[1]: Started cri-containerd-66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef.scope - libcontainer container 66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef. Mar 21 12:37:52.910892 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:37:52.940399 containerd[1520]: time="2025-03-21T12:37:52.940367401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5f8966dc87-5ddj5,Uid:486d9796-33fc-4e2c-8463-31f9da64701f,Namespace:calico-system,Attempt:0,} returns sandbox id \"66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef\"" Mar 21 12:37:53.270599 containerd[1520]: time="2025-03-21T12:37:53.270548078Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:53.271241 containerd[1520]: time="2025-03-21T12:37:53.271158880Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 21 12:37:53.272164 containerd[1520]: time="2025-03-21T12:37:53.272130741Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:53.274273 containerd[1520]: time="2025-03-21T12:37:53.274240665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 21 12:37:53.274755 containerd[1520]: time="2025-03-21T12:37:53.274710835Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.921575789s" Mar 21 12:37:53.274755 containerd[1520]: time="2025-03-21T12:37:53.274752455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 21 12:37:53.275694 containerd[1520]: time="2025-03-21T12:37:53.275653017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\"" Mar 21 12:37:53.276769 containerd[1520]: time="2025-03-21T12:37:53.276745793Z" level=info msg="CreateContainer within sandbox \"d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 21 12:37:53.284278 containerd[1520]: time="2025-03-21T12:37:53.284241920Z" level=info msg="Container f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:53.290975 containerd[1520]: time="2025-03-21T12:37:53.290940223Z" level=info msg="CreateContainer within sandbox \"d1835957fcc07a03b2f6d7f8dcbc7873073c43703fb29a9b9cafecf693d3de95\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id 
\"f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7\"" Mar 21 12:37:53.291457 containerd[1520]: time="2025-03-21T12:37:53.291423979Z" level=info msg="StartContainer for \"f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7\"" Mar 21 12:37:53.292466 containerd[1520]: time="2025-03-21T12:37:53.292443892Z" level=info msg="connecting to shim f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7" address="unix:///run/containerd/s/12a3cfe4e2f437d8ac86c7c948b720f839162ac2b08fc338be1d669103281161" protocol=ttrpc version=3 Mar 21 12:37:53.313380 systemd[1]: Started cri-containerd-f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7.scope - libcontainer container f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7. Mar 21 12:37:53.413522 containerd[1520]: time="2025-03-21T12:37:53.413473387Z" level=info msg="StartContainer for \"f06ab608a4ebf0e9448e11fa655d02ba7168fee731cc9bd727a64925206aa2e7\" returns successfully" Mar 21 12:37:53.717120 containerd[1520]: time="2025-03-21T12:37:53.717057243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crrwt,Uid:a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a,Namespace:kube-system,Attempt:0,}" Mar 21 12:37:53.717624 containerd[1520]: time="2025-03-21T12:37:53.717221942Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-q6fm8,Uid:6036665f-7b1f-4c78-a49e-d8843ac34b11,Namespace:calico-apiserver,Attempt:0,}" Mar 21 12:37:53.924093 systemd-networkd[1424]: cali263f53ff187: Link UP Mar 21 12:37:53.924312 systemd-networkd[1424]: cali263f53ff187: Gained carrier Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.760 [INFO][4259] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--crrwt-eth0 coredns-668d6bf9bc- kube-system a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a 706 0 2025-03-21 12:37:21 +0000 UTC map[k8s-app:kube-dns 
pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-crrwt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali263f53ff187 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.760 [INFO][4259] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.789 [INFO][4285] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" HandleID="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Workload="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.802 [INFO][4285] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" HandleID="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Workload="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000593880), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-crrwt", "timestamp":"2025-03-21 12:37:53.789153308 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.802 [INFO][4285] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.802 [INFO][4285] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.802 [INFO][4285] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.804 [INFO][4285] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.897 [INFO][4285] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.903 [INFO][4285] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.905 [INFO][4285] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.907 [INFO][4285] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.907 [INFO][4285] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.908 [INFO][4285] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71 Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.913 [INFO][4285] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.88.128/26 handle="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.917 [INFO][4285] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.917 [INFO][4285] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" host="localhost" Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.917 [INFO][4285] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 21 12:37:53.940033 containerd[1520]: 2025-03-21 12:37:53.917 [INFO][4285] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" HandleID="k8s-pod-network.9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Workload="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 12:37:53.920 [INFO][4259] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--crrwt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-crrwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali263f53ff187", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 12:37:53.921 [INFO][4259] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 12:37:53.921 [INFO][4259] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali263f53ff187 ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 
12:37:53.923 [INFO][4259] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 12:37:53.924 [INFO][4259] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--crrwt-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 21, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71", Pod:"coredns-668d6bf9bc-crrwt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali263f53ff187", MAC:"96:b7:6c:59:22:73", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:53.940734 containerd[1520]: 2025-03-21 12:37:53.934 [INFO][4259] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" Namespace="kube-system" Pod="coredns-668d6bf9bc-crrwt" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--crrwt-eth0" Mar 21 12:37:54.094331 systemd-networkd[1424]: cali5d6730b121c: Link UP Mar 21 12:37:54.094646 systemd-networkd[1424]: cali5d6730b121c: Gained carrier Mar 21 12:37:54.103475 kubelet[2625]: I0321 12:37:54.102611 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bfdb59c6-lr9hk" podStartSLOduration=24.179877175 podStartE2EDuration="27.102594536s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="2025-03-21 12:37:50.35276151 +0000 UTC m=+35.719106522" lastFinishedPulling="2025-03-21 12:37:53.275478871 +0000 UTC m=+38.641823883" observedRunningTime="2025-03-21 12:37:54.001108056 +0000 UTC m=+39.367453078" watchObservedRunningTime="2025-03-21 12:37:54.102594536 +0000 UTC m=+39.468939548" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.763 [INFO][4270] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0 calico-apiserver-69bfdb59c6- calico-apiserver 6036665f-7b1f-4c78-a49e-d8843ac34b11 701 0 2025-03-21 12:37:27 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:69bfdb59c6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-69bfdb59c6-q6fm8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5d6730b121c [] []}} ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.763 [INFO][4270] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.797 [INFO][4291] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" HandleID="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.899 [INFO][4291] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" HandleID="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003e2dc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-69bfdb59c6-q6fm8", "timestamp":"2025-03-21 12:37:53.797529198 +0000 UTC"}, 
Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.899 [INFO][4291] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.918 [INFO][4291] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.918 [INFO][4291] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.920 [INFO][4291] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:53.998 [INFO][4291] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.067 [INFO][4291] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.070 [INFO][4291] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.073 [INFO][4291] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.073 [INFO][4291] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.075 [INFO][4291] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.079 [INFO][4291] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.087 [INFO][4291] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.087 [INFO][4291] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" host="localhost" Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.087 [INFO][4291] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 21 12:37:54.106105 containerd[1520]: 2025-03-21 12:37:54.087 [INFO][4291] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" HandleID="k8s-pod-network.3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Workload="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.090 [INFO][4270] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0", GenerateName:"calico-apiserver-69bfdb59c6-", Namespace:"calico-apiserver", SelfLink:"", UID:"6036665f-7b1f-4c78-a49e-d8843ac34b11", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bfdb59c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-69bfdb59c6-q6fm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d6730b121c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.091 [INFO][4270] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.091 [INFO][4270] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5d6730b121c ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.093 [INFO][4270] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.093 [INFO][4270] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0", GenerateName:"calico-apiserver-69bfdb59c6-", Namespace:"calico-apiserver", 
SelfLink:"", UID:"6036665f-7b1f-4c78-a49e-d8843ac34b11", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"69bfdb59c6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f", Pod:"calico-apiserver-69bfdb59c6-q6fm8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5d6730b121c", MAC:"ca:47:64:e2:3f:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 21 12:37:54.106774 containerd[1520]: 2025-03-21 12:37:54.100 [INFO][4270] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" Namespace="calico-apiserver" Pod="calico-apiserver-69bfdb59c6-q6fm8" WorkloadEndpoint="localhost-k8s-calico--apiserver--69bfdb59c6--q6fm8-eth0" Mar 21 12:37:54.121494 containerd[1520]: time="2025-03-21T12:37:54.121357586Z" level=info msg="connecting to shim 9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71" address="unix:///run/containerd/s/2ecf7848f4a7ae31caf8155099ab554ae6cfb3fb99e419f1eb0469ce608a5097" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:54.148160 containerd[1520]: 
time="2025-03-21T12:37:54.148018246Z" level=info msg="connecting to shim 3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f" address="unix:///run/containerd/s/8471446d1b633682894eb49b6749d67c5c2db3f3b0d3921abcf58243a79c0a9b" namespace=k8s.io protocol=ttrpc version=3 Mar 21 12:37:54.152350 systemd[1]: Started cri-containerd-9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71.scope - libcontainer container 9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71. Mar 21 12:37:54.171806 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:37:54.177398 systemd[1]: Started cri-containerd-3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f.scope - libcontainer container 3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f. Mar 21 12:37:54.192807 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Mar 21 12:37:54.271812 containerd[1520]: time="2025-03-21T12:37:54.271760981Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-crrwt,Uid:a5ee7a9f-7f1a-42aa-a7ee-2f6c15c27d6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71\"" Mar 21 12:37:54.274183 containerd[1520]: time="2025-03-21T12:37:54.274150381Z" level=info msg="CreateContainer within sandbox \"9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 21 12:37:54.349045 containerd[1520]: time="2025-03-21T12:37:54.348945151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-69bfdb59c6-q6fm8,Uid:6036665f-7b1f-4c78-a49e-d8843ac34b11,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f\"" Mar 21 12:37:54.352575 containerd[1520]: 
time="2025-03-21T12:37:54.352213450Z" level=info msg="CreateContainer within sandbox \"3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 21 12:37:54.372086 containerd[1520]: time="2025-03-21T12:37:54.372029495Z" level=info msg="Container e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:54.378887 containerd[1520]: time="2025-03-21T12:37:54.378860254Z" level=info msg="Container d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd: CDI devices from CRI Config.CDIDevices: []" Mar 21 12:37:54.383875 containerd[1520]: time="2025-03-21T12:37:54.383836866Z" level=info msg="CreateContainer within sandbox \"9483734f4b4edc2dd64928f133721f76569fdf981bc5ae7b9e80de124f32ef71\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da\"" Mar 21 12:37:54.385542 containerd[1520]: time="2025-03-21T12:37:54.385497898Z" level=info msg="StartContainer for \"e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da\"" Mar 21 12:37:54.386382 containerd[1520]: time="2025-03-21T12:37:54.386340317Z" level=info msg="connecting to shim e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da" address="unix:///run/containerd/s/2ecf7848f4a7ae31caf8155099ab554ae6cfb3fb99e419f1eb0469ce608a5097" protocol=ttrpc version=3 Mar 21 12:37:54.390800 containerd[1520]: time="2025-03-21T12:37:54.390737158Z" level=info msg="CreateContainer within sandbox \"3dc4db554fe2d071a836e6111c31ae0fb6be28a98c684d371d06b8b90b5cb92f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd\"" Mar 21 12:37:54.391271 containerd[1520]: time="2025-03-21T12:37:54.391242215Z" level=info msg="StartContainer for 
\"d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd\"" Mar 21 12:37:54.394025 containerd[1520]: time="2025-03-21T12:37:54.393959839Z" level=info msg="connecting to shim d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd" address="unix:///run/containerd/s/8471446d1b633682894eb49b6749d67c5c2db3f3b0d3921abcf58243a79c0a9b" protocol=ttrpc version=3 Mar 21 12:37:54.407359 systemd[1]: Started cri-containerd-e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da.scope - libcontainer container e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da. Mar 21 12:37:54.422392 systemd[1]: Started cri-containerd-d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd.scope - libcontainer container d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd. Mar 21 12:37:54.477953 containerd[1520]: time="2025-03-21T12:37:54.477914150Z" level=info msg="StartContainer for \"e4c5e3730ff4a4f927b3158f27620aa764b57de0325d55f0e1987aff6b2cf4da\" returns successfully" Mar 21 12:37:54.493908 containerd[1520]: time="2025-03-21T12:37:54.493862428Z" level=info msg="StartContainer for \"d84bbe7d8fa184517443e92b9b36a21071f028dfe7bb89f069895d5a159868bd\" returns successfully" Mar 21 12:37:54.652426 systemd-networkd[1424]: cali5f2c7e059b5: Gained IPv6LL Mar 21 12:37:54.717990 containerd[1520]: time="2025-03-21T12:37:54.717944505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m44lj,Uid:fd2a490d-bd62-49bd-ba85-983f4d907bf7,Namespace:calico-system,Attempt:0,}" Mar 21 12:37:54.852974 systemd-networkd[1424]: calic96ec4c54ff: Link UP Mar 21 12:37:54.854263 systemd-networkd[1424]: calic96ec4c54ff: Gained carrier Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.782 [INFO][4501] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--m44lj-eth0 csi-node-driver- calico-system fd2a490d-bd62-49bd-ba85-983f4d907bf7 616 0 
2025-03-21 12:37:27 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:54877d75d5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-m44lj eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calic96ec4c54ff [] []}} ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.783 [INFO][4501] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.812 [INFO][4515] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" HandleID="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Workload="localhost-k8s-csi--node--driver--m44lj-eth0" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.820 [INFO][4515] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" HandleID="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Workload="localhost-k8s-csi--node--driver--m44lj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd110), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-m44lj", "timestamp":"2025-03-21 12:37:54.811418833 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.820 [INFO][4515] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.820 [INFO][4515] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.820 [INFO][4515] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.822 [INFO][4515] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.825 [INFO][4515] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.829 [INFO][4515] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.830 [INFO][4515] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.832 [INFO][4515] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.832 [INFO][4515] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" host="localhost" Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.835 [INFO][4515] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1 Mar 21 12:37:54.869209 
containerd[1520]: 2025-03-21 12:37:54.839 [INFO][4515] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" host="localhost"
Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.846 [INFO][4515] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" host="localhost"
Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.846 [INFO][4515] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" host="localhost"
Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.846 [INFO][4515] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 21 12:37:54.869209 containerd[1520]: 2025-03-21 12:37:54.846 [INFO][4515] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" HandleID="k8s-pod-network.51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Workload="localhost-k8s-csi--node--driver--m44lj-eth0"
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.850 [INFO][4501] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m44lj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd2a490d-bd62-49bd-ba85-983f4d907bf7", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-m44lj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic96ec4c54ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.850 [INFO][4501] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0"
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.850 [INFO][4501] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic96ec4c54ff ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0"
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.855 [INFO][4501] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0"
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.855 [INFO][4501] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--m44lj-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"fd2a490d-bd62-49bd-ba85-983f4d907bf7", ResourceVersion:"616", Generation:0, CreationTimestamp:time.Date(2025, time.March, 21, 12, 37, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"54877d75d5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1", Pod:"csi-node-driver-m44lj", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calic96ec4c54ff", MAC:"86:0c:1e:98:de:2b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 21 12:37:54.871097 containerd[1520]: 2025-03-21 12:37:54.863 [INFO][4501] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" Namespace="calico-system" Pod="csi-node-driver-m44lj" WorkloadEndpoint="localhost-k8s-csi--node--driver--m44lj-eth0"
Mar 21 12:37:54.900291 containerd[1520]: time="2025-03-21T12:37:54.900207201Z" level=info msg="connecting to shim 51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1" address="unix:///run/containerd/s/887fb9e6c7fdf439412c16897ee3c4d6a64d1f624d7c36df39d58dc998f17906" namespace=k8s.io protocol=ttrpc version=3
Mar 21 12:37:54.922456 systemd[1]: Started cri-containerd-51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1.scope - libcontainer container 51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1.
Mar 21 12:37:54.934657 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Mar 21 12:37:54.950814 containerd[1520]: time="2025-03-21T12:37:54.950722506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-m44lj,Uid:fd2a490d-bd62-49bd-ba85-983f4d907bf7,Namespace:calico-system,Attempt:0,} returns sandbox id \"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1\""
Mar 21 12:37:54.983539 kubelet[2625]: I0321 12:37:54.983472 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-69bfdb59c6-q6fm8" podStartSLOduration=27.983455538 podStartE2EDuration="27.983455538s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:54.971682994 +0000 UTC m=+40.338028026" watchObservedRunningTime="2025-03-21 12:37:54.983455538 +0000 UTC m=+40.349800550"
Mar 21 12:37:54.983765 kubelet[2625]: I0321 12:37:54.983712 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-crrwt" podStartSLOduration=33.983707755 podStartE2EDuration="33.983707755s" podCreationTimestamp="2025-03-21 12:37:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-21 12:37:54.983402664 +0000 UTC m=+40.349747676" watchObservedRunningTime="2025-03-21 12:37:54.983707755 +0000 UTC m=+40.350052767"
Mar 21 12:37:55.100390 systemd-networkd[1424]: cali263f53ff187: Gained IPv6LL
Mar 21 12:37:55.484501 systemd-networkd[1424]: cali5d6730b121c: Gained IPv6LL
Mar 21 12:37:55.724823 systemd[1]: Started sshd@9-10.0.0.113:22-10.0.0.1:47018.service - OpenSSH per-connection server daemon (10.0.0.1:47018).
Mar 21 12:37:55.779450 sshd[4592]: Accepted publickey for core from 10.0.0.1 port 47018 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:37:55.781737 sshd-session[4592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:55.790668 systemd-logind[1503]: New session 10 of user core.
Mar 21 12:37:55.795383 systemd[1]: Started session-10.scope - Session 10 of User core.
Mar 21 12:37:55.938282 sshd[4594]: Connection closed by 10.0.0.1 port 47018
Mar 21 12:37:55.938613 sshd-session[4592]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:55.950443 systemd[1]: sshd@9-10.0.0.113:22-10.0.0.1:47018.service: Deactivated successfully.
Mar 21 12:37:55.953437 systemd[1]: session-10.scope: Deactivated successfully.
Mar 21 12:37:55.955643 systemd-logind[1503]: Session 10 logged out. Waiting for processes to exit.
Mar 21 12:37:55.956829 systemd[1]: Started sshd@10-10.0.0.113:22-10.0.0.1:47030.service - OpenSSH per-connection server daemon (10.0.0.1:47030).
Mar 21 12:37:55.958987 systemd-logind[1503]: Removed session 10.
Mar 21 12:37:56.001886 sshd[4608]: Accepted publickey for core from 10.0.0.1 port 47030 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:37:56.003706 sshd-session[4608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:56.009077 systemd-logind[1503]: New session 11 of user core.
Mar 21 12:37:56.018496 systemd[1]: Started session-11.scope - Session 11 of User core.
Mar 21 12:37:56.267667 containerd[1520]: time="2025-03-21T12:37:56.267618544Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:56.268455 containerd[1520]: time="2025-03-21T12:37:56.268420131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.2: active requests=0, bytes read=34792912"
Mar 21 12:37:56.269804 containerd[1520]: time="2025-03-21T12:37:56.269751301Z" level=info msg="ImageCreate event name:\"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:56.271607 containerd[1520]: time="2025-03-21T12:37:56.271578749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:56.272277 containerd[1520]: time="2025-03-21T12:37:56.272215248Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" with image id \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:6d1f392b747f912366ec5c60ee1130952c2c07e8ce24c53480187daa0e3364aa\", size \"36285984\" in 2.996518465s"
Mar 21 12:37:56.272277 containerd[1520]: time="2025-03-21T12:37:56.272273229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.2\" returns image reference \"sha256:f6a228558381bc7de7c5296ac6c4e903cfda929899c85806367a726ef6d7ff5f\""
Mar 21 12:37:56.273061 containerd[1520]: time="2025-03-21T12:37:56.273023197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\""
Mar 21 12:37:56.280172 containerd[1520]: time="2025-03-21T12:37:56.280139885Z" level=info msg="CreateContainer within sandbox \"66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Mar 21 12:37:56.289065 containerd[1520]: time="2025-03-21T12:37:56.288349975Z" level=info msg="Container d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:56.299062 containerd[1520]: time="2025-03-21T12:37:56.299011947Z" level=info msg="CreateContainer within sandbox \"66316d256dedcc096f07ebbef6702e21732106364d2774ab1b6b505b60f4d1ef\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9\""
Mar 21 12:37:56.299493 containerd[1520]: time="2025-03-21T12:37:56.299460083Z" level=info msg="StartContainer for \"d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9\""
Mar 21 12:37:56.300722 containerd[1520]: time="2025-03-21T12:37:56.300697992Z" level=info msg="connecting to shim d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9" address="unix:///run/containerd/s/32685d29e298560b8bdd027d028a68fe915e740a86d000c22ee34cade4605bb1" protocol=ttrpc version=3
Mar 21 12:37:56.317370 systemd-networkd[1424]: calic96ec4c54ff: Gained IPv6LL
Mar 21 12:37:56.325390 systemd[1]: Started cri-containerd-d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9.scope - libcontainer container d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9.
Mar 21 12:37:56.381323 containerd[1520]: time="2025-03-21T12:37:56.379895330Z" level=info msg="StartContainer for \"d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9\" returns successfully"
Mar 21 12:37:56.395312 sshd[4611]: Connection closed by 10.0.0.1 port 47030
Mar 21 12:37:56.395888 sshd-session[4608]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:56.408353 systemd[1]: sshd@10-10.0.0.113:22-10.0.0.1:47030.service: Deactivated successfully.
Mar 21 12:37:56.410677 systemd[1]: session-11.scope: Deactivated successfully.
Mar 21 12:37:56.413750 systemd-logind[1503]: Session 11 logged out. Waiting for processes to exit.
Mar 21 12:37:56.416757 systemd[1]: Started sshd@11-10.0.0.113:22-10.0.0.1:47044.service - OpenSSH per-connection server daemon (10.0.0.1:47044).
Mar 21 12:37:56.419224 systemd-logind[1503]: Removed session 11.
Mar 21 12:37:56.475736 sshd[4653]: Accepted publickey for core from 10.0.0.1 port 47044 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:37:56.477398 sshd-session[4653]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:37:56.482223 systemd-logind[1503]: New session 12 of user core.
Mar 21 12:37:56.490434 systemd[1]: Started session-12.scope - Session 12 of User core.
Mar 21 12:37:56.602063 sshd[4657]: Connection closed by 10.0.0.1 port 47044
Mar 21 12:37:56.602601 sshd-session[4653]: pam_unix(sshd:session): session closed for user core
Mar 21 12:37:56.607075 systemd[1]: sshd@11-10.0.0.113:22-10.0.0.1:47044.service: Deactivated successfully.
Mar 21 12:37:56.609449 systemd[1]: session-12.scope: Deactivated successfully.
Mar 21 12:37:56.610119 systemd-logind[1503]: Session 12 logged out. Waiting for processes to exit.
Mar 21 12:37:56.610961 systemd-logind[1503]: Removed session 12.
Mar 21 12:37:56.976496 kubelet[2625]: I0321 12:37:56.976436 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5f8966dc87-5ddj5" podStartSLOduration=26.645583438 podStartE2EDuration="29.976420342s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="2025-03-21 12:37:52.942032557 +0000 UTC m=+38.308377559" lastFinishedPulling="2025-03-21 12:37:56.272869461 +0000 UTC m=+41.639214463" observedRunningTime="2025-03-21 12:37:56.976255584 +0000 UTC m=+42.342600596" watchObservedRunningTime="2025-03-21 12:37:56.976420342 +0000 UTC m=+42.342765354"
Mar 21 12:37:57.010278 containerd[1520]: time="2025-03-21T12:37:57.010210079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9\" id:\"76e18b14e129ee0ab8f685fe404d31152b914b3344fcd372e60b92fe631087da\" pid:4685 exited_at:{seconds:1742560677 nanos:9795079}"
Mar 21 12:37:57.569784 containerd[1520]: time="2025-03-21T12:37:57.569730910Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:57.570471 containerd[1520]: time="2025-03-21T12:37:57.570426252Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887"
Mar 21 12:37:57.571515 containerd[1520]: time="2025-03-21T12:37:57.571482850Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:57.573264 containerd[1520]: time="2025-03-21T12:37:57.573203659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:57.573666 containerd[1520]: time="2025-03-21T12:37:57.573636263Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.300582096s"
Mar 21 12:37:57.573698 containerd[1520]: time="2025-03-21T12:37:57.573665250Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\""
Mar 21 12:37:57.575456 containerd[1520]: time="2025-03-21T12:37:57.575416337Z" level=info msg="CreateContainer within sandbox \"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Mar 21 12:37:57.596433 containerd[1520]: time="2025-03-21T12:37:57.596392192Z" level=info msg="Container 2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:57.604239 containerd[1520]: time="2025-03-21T12:37:57.604198691Z" level=info msg="CreateContainer within sandbox \"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060\""
Mar 21 12:37:57.604813 containerd[1520]: time="2025-03-21T12:37:57.604696782Z" level=info msg="StartContainer for \"2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060\""
Mar 21 12:37:57.606098 containerd[1520]: time="2025-03-21T12:37:57.606064139Z" level=info msg="connecting to shim 2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060" address="unix:///run/containerd/s/887fb9e6c7fdf439412c16897ee3c4d6a64d1f624d7c36df39d58dc998f17906" protocol=ttrpc version=3
Mar 21 12:37:57.628374 systemd[1]: Started cri-containerd-2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060.scope - libcontainer container 2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060.
Mar 21 12:37:57.732038 containerd[1520]: time="2025-03-21T12:37:57.731996921Z" level=info msg="StartContainer for \"2bd813b3ce1139bdde5071ef28e3b309e0fee9483d42316faf03b5a51aa44060\" returns successfully"
Mar 21 12:37:57.733130 containerd[1520]: time="2025-03-21T12:37:57.733086313Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\""
Mar 21 12:37:59.372334 containerd[1520]: time="2025-03-21T12:37:59.372268254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:59.373131 containerd[1520]: time="2025-03-21T12:37:59.373079426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 21 12:37:59.374307 containerd[1520]: time="2025-03-21T12:37:59.374262997Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:59.376502 containerd[1520]: time="2025-03-21T12:37:59.376450291Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 21 12:37:59.376924 containerd[1520]: time="2025-03-21T12:37:59.376888375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.643765672s"
Mar 21 12:37:59.376924 containerd[1520]: time="2025-03-21T12:37:59.376918272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 21 12:37:59.379208 containerd[1520]: time="2025-03-21T12:37:59.379143008Z" level=info msg="CreateContainer within sandbox \"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 21 12:37:59.390041 containerd[1520]: time="2025-03-21T12:37:59.389994173Z" level=info msg="Container d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb: CDI devices from CRI Config.CDIDevices: []"
Mar 21 12:37:59.399425 containerd[1520]: time="2025-03-21T12:37:59.399391016Z" level=info msg="CreateContainer within sandbox \"51f130fa36f02a105efd6c651d7969621aca47f9baef8a63df65c7c79c56d0a1\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb\""
Mar 21 12:37:59.399918 containerd[1520]: time="2025-03-21T12:37:59.399889327Z" level=info msg="StartContainer for \"d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb\""
Mar 21 12:37:59.401457 containerd[1520]: time="2025-03-21T12:37:59.401426127Z" level=info msg="connecting to shim d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb" address="unix:///run/containerd/s/887fb9e6c7fdf439412c16897ee3c4d6a64d1f624d7c36df39d58dc998f17906" protocol=ttrpc version=3
Mar 21 12:37:59.423347 systemd[1]: Started cri-containerd-d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb.scope - libcontainer container d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb.
Mar 21 12:37:59.464499 containerd[1520]: time="2025-03-21T12:37:59.464451826Z" level=info msg="StartContainer for \"d76d25d3b4933789c1010a29db8435eda95e2a9209abee5780795fb6f0e6ddeb\" returns successfully"
Mar 21 12:37:59.783308 kubelet[2625]: I0321 12:37:59.783272 2625 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 21 12:37:59.783308 kubelet[2625]: I0321 12:37:59.783316 2625 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 21 12:37:59.988424 kubelet[2625]: I0321 12:37:59.987627 2625 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-m44lj" podStartSLOduration=28.562122316 podStartE2EDuration="32.987609946s" podCreationTimestamp="2025-03-21 12:37:27 +0000 UTC" firstStartedPulling="2025-03-21 12:37:54.95241563 +0000 UTC m=+40.318760642" lastFinishedPulling="2025-03-21 12:37:59.37790326 +0000 UTC m=+44.744248272" observedRunningTime="2025-03-21 12:37:59.987479245 +0000 UTC m=+45.353824257" watchObservedRunningTime="2025-03-21 12:37:59.987609946 +0000 UTC m=+45.353954958"
Mar 21 12:38:01.617573 systemd[1]: Started sshd@12-10.0.0.113:22-10.0.0.1:47046.service - OpenSSH per-connection server daemon (10.0.0.1:47046).
Mar 21 12:38:01.674448 sshd[4780]: Accepted publickey for core from 10.0.0.1 port 47046 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:01.676098 sshd-session[4780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:01.680167 systemd-logind[1503]: New session 13 of user core.
Mar 21 12:38:01.689345 systemd[1]: Started session-13.scope - Session 13 of User core.
Mar 21 12:38:01.819752 sshd[4782]: Connection closed by 10.0.0.1 port 47046
Mar 21 12:38:01.820114 sshd-session[4780]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:01.824497 systemd[1]: sshd@12-10.0.0.113:22-10.0.0.1:47046.service: Deactivated successfully.
Mar 21 12:38:01.826394 systemd[1]: session-13.scope: Deactivated successfully.
Mar 21 12:38:01.827041 systemd-logind[1503]: Session 13 logged out. Waiting for processes to exit.
Mar 21 12:38:01.827924 systemd-logind[1503]: Removed session 13.
Mar 21 12:38:06.833370 systemd[1]: Started sshd@13-10.0.0.113:22-10.0.0.1:38764.service - OpenSSH per-connection server daemon (10.0.0.1:38764).
Mar 21 12:38:06.873603 sshd[4798]: Accepted publickey for core from 10.0.0.1 port 38764 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:06.874948 sshd-session[4798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:06.878825 systemd-logind[1503]: New session 14 of user core.
Mar 21 12:38:06.892355 systemd[1]: Started session-14.scope - Session 14 of User core.
Mar 21 12:38:07.000861 sshd[4800]: Connection closed by 10.0.0.1 port 38764
Mar 21 12:38:07.001277 sshd-session[4798]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:07.005400 systemd[1]: sshd@13-10.0.0.113:22-10.0.0.1:38764.service: Deactivated successfully.
Mar 21 12:38:07.007519 systemd[1]: session-14.scope: Deactivated successfully.
Mar 21 12:38:07.008160 systemd-logind[1503]: Session 14 logged out. Waiting for processes to exit.
Mar 21 12:38:07.009207 systemd-logind[1503]: Removed session 14.
Mar 21 12:38:12.017259 systemd[1]: Started sshd@14-10.0.0.113:22-10.0.0.1:38778.service - OpenSSH per-connection server daemon (10.0.0.1:38778).
Mar 21 12:38:12.065929 sshd[4821]: Accepted publickey for core from 10.0.0.1 port 38778 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:12.067527 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:12.071533 systemd-logind[1503]: New session 15 of user core.
Mar 21 12:38:12.081345 systemd[1]: Started session-15.scope - Session 15 of User core.
Mar 21 12:38:12.196076 sshd[4823]: Connection closed by 10.0.0.1 port 38778
Mar 21 12:38:12.196412 sshd-session[4821]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:12.200938 systemd[1]: sshd@14-10.0.0.113:22-10.0.0.1:38778.service: Deactivated successfully.
Mar 21 12:38:12.203042 systemd[1]: session-15.scope: Deactivated successfully.
Mar 21 12:38:12.203707 systemd-logind[1503]: Session 15 logged out. Waiting for processes to exit.
Mar 21 12:38:12.204648 systemd-logind[1503]: Removed session 15.
Mar 21 12:38:17.208902 systemd[1]: Started sshd@15-10.0.0.113:22-10.0.0.1:42044.service - OpenSSH per-connection server daemon (10.0.0.1:42044).
Mar 21 12:38:17.274791 sshd[4840]: Accepted publickey for core from 10.0.0.1 port 42044 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:17.276182 sshd-session[4840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:17.279939 systemd-logind[1503]: New session 16 of user core.
Mar 21 12:38:17.289336 systemd[1]: Started session-16.scope - Session 16 of User core.
Mar 21 12:38:17.400779 sshd[4842]: Connection closed by 10.0.0.1 port 42044
Mar 21 12:38:17.401131 sshd-session[4840]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:17.409164 systemd[1]: sshd@15-10.0.0.113:22-10.0.0.1:42044.service: Deactivated successfully.
Mar 21 12:38:17.411441 systemd[1]: session-16.scope: Deactivated successfully.
Mar 21 12:38:17.413211 systemd-logind[1503]: Session 16 logged out. Waiting for processes to exit.
Mar 21 12:38:17.414935 systemd[1]: Started sshd@16-10.0.0.113:22-10.0.0.1:42046.service - OpenSSH per-connection server daemon (10.0.0.1:42046).
Mar 21 12:38:17.416374 systemd-logind[1503]: Removed session 16.
Mar 21 12:38:17.476498 sshd[4854]: Accepted publickey for core from 10.0.0.1 port 42046 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:17.477936 sshd-session[4854]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:17.482043 systemd-logind[1503]: New session 17 of user core.
Mar 21 12:38:17.491342 systemd[1]: Started session-17.scope - Session 17 of User core.
Mar 21 12:38:17.761791 sshd[4857]: Connection closed by 10.0.0.1 port 42046
Mar 21 12:38:17.762281 sshd-session[4854]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:17.776047 systemd[1]: sshd@16-10.0.0.113:22-10.0.0.1:42046.service: Deactivated successfully.
Mar 21 12:38:17.778020 systemd[1]: session-17.scope: Deactivated successfully.
Mar 21 12:38:17.779621 systemd-logind[1503]: Session 17 logged out. Waiting for processes to exit.
Mar 21 12:38:17.781042 systemd[1]: Started sshd@17-10.0.0.113:22-10.0.0.1:42048.service - OpenSSH per-connection server daemon (10.0.0.1:42048).
Mar 21 12:38:17.782302 systemd-logind[1503]: Removed session 17.
Mar 21 12:38:17.834552 sshd[4869]: Accepted publickey for core from 10.0.0.1 port 42048 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:17.835947 sshd-session[4869]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:17.840224 systemd-logind[1503]: New session 18 of user core.
Mar 21 12:38:17.847353 systemd[1]: Started session-18.scope - Session 18 of User core.
Mar 21 12:38:18.002280 containerd[1520]: time="2025-03-21T12:38:18.002217748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cc8e99ed003eea88cdde8bfa887a3287ed81e91cd370158e74d07fcc5ef84365\" id:\"e60a9cabf11cf57c72fa680b3e6b1bf0cba654c8f0430d90ea2448bb3820ad40\" pid:4891 exited_at:{seconds:1742560698 nanos:1877537}"
Mar 21 12:38:18.661020 sshd[4872]: Connection closed by 10.0.0.1 port 42048
Mar 21 12:38:18.661682 sshd-session[4869]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:18.672196 systemd[1]: sshd@17-10.0.0.113:22-10.0.0.1:42048.service: Deactivated successfully.
Mar 21 12:38:18.674763 systemd[1]: session-18.scope: Deactivated successfully.
Mar 21 12:38:18.680448 systemd-logind[1503]: Session 18 logged out. Waiting for processes to exit.
Mar 21 12:38:18.685350 systemd[1]: Started sshd@18-10.0.0.113:22-10.0.0.1:42050.service - OpenSSH per-connection server daemon (10.0.0.1:42050).
Mar 21 12:38:18.687793 systemd-logind[1503]: Removed session 18.
Mar 21 12:38:18.736801 sshd[4915]: Accepted publickey for core from 10.0.0.1 port 42050 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:18.738546 sshd-session[4915]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:18.742968 systemd-logind[1503]: New session 19 of user core.
Mar 21 12:38:18.750349 systemd[1]: Started session-19.scope - Session 19 of User core.
Mar 21 12:38:18.959017 sshd[4918]: Connection closed by 10.0.0.1 port 42050
Mar 21 12:38:18.959570 sshd-session[4915]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:18.968922 systemd[1]: sshd@18-10.0.0.113:22-10.0.0.1:42050.service: Deactivated successfully.
Mar 21 12:38:18.970981 systemd[1]: session-19.scope: Deactivated successfully.
Mar 21 12:38:18.972902 systemd-logind[1503]: Session 19 logged out. Waiting for processes to exit.
Mar 21 12:38:18.975159 systemd[1]: Started sshd@19-10.0.0.113:22-10.0.0.1:42056.service - OpenSSH per-connection server daemon (10.0.0.1:42056).
Mar 21 12:38:18.976267 systemd-logind[1503]: Removed session 19.
Mar 21 12:38:19.023784 sshd[4928]: Accepted publickey for core from 10.0.0.1 port 42056 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:19.025369 sshd-session[4928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:19.029921 systemd-logind[1503]: New session 20 of user core.
Mar 21 12:38:19.039360 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 21 12:38:19.147801 sshd[4931]: Connection closed by 10.0.0.1 port 42056
Mar 21 12:38:19.148120 sshd-session[4928]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:19.152378 systemd[1]: sshd@19-10.0.0.113:22-10.0.0.1:42056.service: Deactivated successfully.
Mar 21 12:38:19.154579 systemd[1]: session-20.scope: Deactivated successfully.
Mar 21 12:38:19.155218 systemd-logind[1503]: Session 20 logged out. Waiting for processes to exit.
Mar 21 12:38:19.156037 systemd-logind[1503]: Removed session 20.
Mar 21 12:38:24.161534 systemd[1]: Started sshd@20-10.0.0.113:22-10.0.0.1:37562.service - OpenSSH per-connection server daemon (10.0.0.1:37562).
Mar 21 12:38:24.213005 sshd[4951]: Accepted publickey for core from 10.0.0.1 port 37562 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:24.214437 sshd-session[4951]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:24.218514 systemd-logind[1503]: New session 21 of user core.
Mar 21 12:38:24.236354 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 21 12:38:24.344981 sshd[4953]: Connection closed by 10.0.0.1 port 37562
Mar 21 12:38:24.345318 sshd-session[4951]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:24.349191 systemd[1]: sshd@20-10.0.0.113:22-10.0.0.1:37562.service: Deactivated successfully.
Mar 21 12:38:24.351536 systemd[1]: session-21.scope: Deactivated successfully.
Mar 21 12:38:24.352276 systemd-logind[1503]: Session 21 logged out. Waiting for processes to exit.
Mar 21 12:38:24.353256 systemd-logind[1503]: Removed session 21.
Mar 21 12:38:27.013704 containerd[1520]: time="2025-03-21T12:38:27.013647342Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d689856a072f95073383d17aea1c12b85e158d27f7d59883dc3aae37298c88c9\" id:\"7ee77a5b963e2cac8234a3cd9af2ef0c113e7b95358b1c092784cb74632943c6\" pid:4978 exited_at:{seconds:1742560707 nanos:13438856}"
Mar 21 12:38:29.358245 systemd[1]: Started sshd@21-10.0.0.113:22-10.0.0.1:37568.service - OpenSSH per-connection server daemon (10.0.0.1:37568).
Mar 21 12:38:29.405717 sshd[4995]: Accepted publickey for core from 10.0.0.1 port 37568 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:29.407205 sshd-session[4995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:29.411418 systemd-logind[1503]: New session 22 of user core.
Mar 21 12:38:29.419366 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 21 12:38:29.529733 sshd[4997]: Connection closed by 10.0.0.1 port 37568
Mar 21 12:38:29.530179 sshd-session[4995]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:29.534343 systemd[1]: sshd@21-10.0.0.113:22-10.0.0.1:37568.service: Deactivated successfully.
Mar 21 12:38:29.536699 systemd[1]: session-22.scope: Deactivated successfully.
Mar 21 12:38:29.537521 systemd-logind[1503]: Session 22 logged out. Waiting for processes to exit.
Mar 21 12:38:29.538626 systemd-logind[1503]: Removed session 22.
Mar 21 12:38:34.544455 systemd[1]: Started sshd@22-10.0.0.113:22-10.0.0.1:46148.service - OpenSSH per-connection server daemon (10.0.0.1:46148).
Mar 21 12:38:34.603030 sshd[5011]: Accepted publickey for core from 10.0.0.1 port 46148 ssh2: RSA SHA256:lTgMMt/0ISQMJMexy8Vr8KG+9PSByON0JAakDTVcySk
Mar 21 12:38:34.604658 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 21 12:38:34.609044 systemd-logind[1503]: New session 23 of user core.
Mar 21 12:38:34.614369 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 21 12:38:34.736567 sshd[5013]: Connection closed by 10.0.0.1 port 46148
Mar 21 12:38:34.737456 sshd-session[5011]: pam_unix(sshd:session): session closed for user core
Mar 21 12:38:34.740576 systemd[1]: sshd@22-10.0.0.113:22-10.0.0.1:46148.service: Deactivated successfully.
Mar 21 12:38:34.743680 systemd[1]: session-23.scope: Deactivated successfully.
Mar 21 12:38:34.744517 systemd-logind[1503]: Session 23 logged out. Waiting for processes to exit.
Mar 21 12:38:34.745535 systemd-logind[1503]: Removed session 23.