Mar 25 01:32:58.114913 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 24 23:38:35 -00 2025 Mar 25 01:32:58.114955 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:32:58.114977 kernel: BIOS-provided physical RAM map: Mar 25 01:32:58.114990 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Mar 25 01:32:58.115002 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Mar 25 01:32:58.115015 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Mar 25 01:32:58.115031 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Mar 25 01:32:58.115045 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Mar 25 01:32:58.115063 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bd326fff] usable Mar 25 01:32:58.115078 kernel: BIOS-e820: [mem 0x00000000bd327000-0x00000000bd32efff] ACPI data Mar 25 01:32:58.115092 kernel: BIOS-e820: [mem 0x00000000bd32f000-0x00000000bf8ecfff] usable Mar 25 01:32:58.115106 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Mar 25 01:32:58.115119 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Mar 25 01:32:58.115133 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Mar 25 01:32:58.115154 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Mar 25 01:32:58.115170 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Mar 25 01:32:58.115185 kernel: BIOS-e820: [mem 
0x0000000100000000-0x000000021fffffff] usable Mar 25 01:32:58.115201 kernel: NX (Execute Disable) protection: active Mar 25 01:32:58.115216 kernel: APIC: Static calls initialized Mar 25 01:32:58.115232 kernel: efi: EFI v2.7 by EDK II Mar 25 01:32:58.115270 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd327018 Mar 25 01:32:58.115293 kernel: random: crng init done Mar 25 01:32:58.115307 kernel: secureboot: Secure boot disabled Mar 25 01:32:58.115322 kernel: SMBIOS 2.4 present. Mar 25 01:32:58.115336 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025 Mar 25 01:32:58.115356 kernel: Hypervisor detected: KVM Mar 25 01:32:58.115369 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Mar 25 01:32:58.115394 kernel: kvm-clock: using sched offset of 13446135513 cycles Mar 25 01:32:58.115410 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Mar 25 01:32:58.115426 kernel: tsc: Detected 2299.998 MHz processor Mar 25 01:32:58.115441 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Mar 25 01:32:58.115456 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Mar 25 01:32:58.115470 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Mar 25 01:32:58.115484 kernel: MTRR map: 3 entries (2 fixed + 1 variable; max 18), built from 8 variable MTRRs Mar 25 01:32:58.115505 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Mar 25 01:32:58.115521 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Mar 25 01:32:58.115536 kernel: Using GB pages for direct mapping Mar 25 01:32:58.115553 kernel: ACPI: Early table checksum verification disabled Mar 25 01:32:58.115569 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Mar 25 01:32:58.115586 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Mar 25 01:32:58.115611 kernel: ACPI: FACP 
0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Mar 25 01:32:58.115634 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Mar 25 01:32:58.115652 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Mar 25 01:32:58.115669 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20240322) Mar 25 01:32:58.115688 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Mar 25 01:32:58.115705 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Mar 25 01:32:58.115721 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Mar 25 01:32:58.115738 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Mar 25 01:32:58.115760 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Mar 25 01:32:58.115777 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Mar 25 01:32:58.115794 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Mar 25 01:32:58.115811 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Mar 25 01:32:58.115828 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Mar 25 01:32:58.115846 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Mar 25 01:32:58.115864 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Mar 25 01:32:58.115881 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Mar 25 01:32:58.115899 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Mar 25 01:32:58.115920 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Mar 25 01:32:58.115937 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Mar 25 01:32:58.115954 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Mar 25 01:32:58.115971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x0009ffff] Mar 25 01:32:58.115988 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Mar 25 01:32:58.116005 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff] Mar 25 01:32:58.116023 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Mar 25 01:32:58.116041 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Mar 25 01:32:58.116058 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Mar 25 01:32:58.116081 kernel: Zone ranges: Mar 25 01:32:58.116099 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Mar 25 01:32:58.116116 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Mar 25 01:32:58.116133 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Mar 25 01:32:58.116150 kernel: Movable zone start for each node Mar 25 01:32:58.116168 kernel: Early memory node ranges Mar 25 01:32:58.116185 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Mar 25 01:32:58.116202 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Mar 25 01:32:58.116219 kernel: node 0: [mem 0x0000000000100000-0x00000000bd326fff] Mar 25 01:32:58.116242 kernel: node 0: [mem 0x00000000bd32f000-0x00000000bf8ecfff] Mar 25 01:32:58.116290 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Mar 25 01:32:58.116305 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Mar 25 01:32:58.116321 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Mar 25 01:32:58.116338 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Mar 25 01:32:58.116354 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Mar 25 01:32:58.116370 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Mar 25 01:32:58.116394 kernel: On node 0, zone DMA32: 8 pages in unavailable ranges Mar 25 01:32:58.116410 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Mar 25 01:32:58.116427 kernel: On 
node 0, zone Normal: 32 pages in unavailable ranges Mar 25 01:32:58.116448 kernel: ACPI: PM-Timer IO Port: 0xb008 Mar 25 01:32:58.116465 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Mar 25 01:32:58.116482 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Mar 25 01:32:58.116499 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Mar 25 01:32:58.116515 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Mar 25 01:32:58.116531 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Mar 25 01:32:58.116548 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Mar 25 01:32:58.116564 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Mar 25 01:32:58.116581 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Mar 25 01:32:58.116602 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Mar 25 01:32:58.116619 kernel: Booting paravirtualized kernel on KVM Mar 25 01:32:58.116636 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Mar 25 01:32:58.116653 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Mar 25 01:32:58.116670 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Mar 25 01:32:58.116687 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Mar 25 01:32:58.116703 kernel: pcpu-alloc: [0] 0 1 Mar 25 01:32:58.116719 kernel: kvm-guest: PV spinlocks enabled Mar 25 01:32:58.116736 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Mar 25 01:32:58.116760 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce 
verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9 Mar 25 01:32:58.116783 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Mar 25 01:32:58.116800 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Mar 25 01:32:58.116817 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Mar 25 01:32:58.116834 kernel: Fallback order for Node 0: 0 Mar 25 01:32:58.116852 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932272 Mar 25 01:32:58.116869 kernel: Policy zone: Normal Mar 25 01:32:58.116885 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Mar 25 01:32:58.116906 kernel: software IO TLB: area num 2. Mar 25 01:32:58.116923 kernel: Memory: 7509272K/7860552K available (14336K kernel code, 2304K rwdata, 25060K rodata, 43592K init, 1472K bss, 351024K reserved, 0K cma-reserved) Mar 25 01:32:58.116940 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Mar 25 01:32:58.116956 kernel: Kernel/User page tables isolation: enabled Mar 25 01:32:58.116972 kernel: ftrace: allocating 37985 entries in 149 pages Mar 25 01:32:58.116989 kernel: ftrace: allocated 149 pages with 4 groups Mar 25 01:32:58.117007 kernel: Dynamic Preempt: voluntary Mar 25 01:32:58.117043 kernel: rcu: Preemptible hierarchical RCU implementation. Mar 25 01:32:58.117061 kernel: rcu: RCU event tracing is enabled. Mar 25 01:32:58.117079 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Mar 25 01:32:58.117098 kernel: Trampoline variant of Tasks RCU enabled. Mar 25 01:32:58.117120 kernel: Rude variant of Tasks RCU enabled. Mar 25 01:32:58.117137 kernel: Tracing variant of Tasks RCU enabled. Mar 25 01:32:58.117155 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Mar 25 01:32:58.117173 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Mar 25 01:32:58.117191 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Mar 25 01:32:58.117213 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Mar 25 01:32:58.117231 kernel: Console: colour dummy device 80x25 Mar 25 01:32:58.117263 kernel: printk: console [ttyS0] enabled Mar 25 01:32:58.117280 kernel: ACPI: Core revision 20230628 Mar 25 01:32:58.117297 kernel: APIC: Switch to symmetric I/O mode setup Mar 25 01:32:58.117313 kernel: x2apic enabled Mar 25 01:32:58.117330 kernel: APIC: Switched APIC routing to: physical x2apic Mar 25 01:32:58.117347 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Mar 25 01:32:58.117363 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Mar 25 01:32:58.117394 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998) Mar 25 01:32:58.117413 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Mar 25 01:32:58.117431 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Mar 25 01:32:58.117447 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Mar 25 01:32:58.117463 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Mar 25 01:32:58.117481 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Mar 25 01:32:58.117499 kernel: Spectre V2 : Mitigation: IBRS Mar 25 01:32:58.117517 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Mar 25 01:32:58.117536 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Mar 25 01:32:58.117559 kernel: RETBleed: Mitigation: IBRS Mar 25 01:32:58.117577 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Mar 25 01:32:58.117594 kernel: Spectre V2 : User space: Mitigation: STIBP via prctl Mar 25 
01:32:58.117612 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Mar 25 01:32:58.117630 kernel: MDS: Mitigation: Clear CPU buffers Mar 25 01:32:58.117648 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Mar 25 01:32:58.117666 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Mar 25 01:32:58.117683 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Mar 25 01:32:58.117701 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Mar 25 01:32:58.117723 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Mar 25 01:32:58.117741 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Mar 25 01:32:58.117759 kernel: Freeing SMP alternatives memory: 32K Mar 25 01:32:58.117777 kernel: pid_max: default: 32768 minimum: 301 Mar 25 01:32:58.117795 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Mar 25 01:32:58.117812 kernel: landlock: Up and running. Mar 25 01:32:58.117830 kernel: SELinux: Initializing. Mar 25 01:32:58.117848 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 25 01:32:58.117866 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Mar 25 01:32:58.117888 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Mar 25 01:32:58.117906 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:32:58.117924 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:32:58.117942 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Mar 25 01:32:58.117960 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. 
Mar 25 01:32:58.117978 kernel: signal: max sigframe size: 1776 Mar 25 01:32:58.117995 kernel: rcu: Hierarchical SRCU implementation. Mar 25 01:32:58.118013 kernel: rcu: Max phase no-delay instances is 400. Mar 25 01:32:58.118031 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Mar 25 01:32:58.118052 kernel: smp: Bringing up secondary CPUs ... Mar 25 01:32:58.118070 kernel: smpboot: x86: Booting SMP configuration: Mar 25 01:32:58.118087 kernel: .... node #0, CPUs: #1 Mar 25 01:32:58.118107 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Mar 25 01:32:58.118126 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Mar 25 01:32:58.118144 kernel: smp: Brought up 1 node, 2 CPUs Mar 25 01:32:58.118162 kernel: smpboot: Max logical packages: 1 Mar 25 01:32:58.118180 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Mar 25 01:32:58.118202 kernel: devtmpfs: initialized Mar 25 01:32:58.118219 kernel: x86/mm: Memory block size: 128MB Mar 25 01:32:58.118237 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Mar 25 01:32:58.120300 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Mar 25 01:32:58.120330 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Mar 25 01:32:58.120352 kernel: pinctrl core: initialized pinctrl subsystem Mar 25 01:32:58.120381 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Mar 25 01:32:58.120399 kernel: audit: initializing netlink subsys (disabled) Mar 25 01:32:58.120418 kernel: audit: type=2000 audit(1742866376.503:1): state=initialized audit_enabled=0 res=1 Mar 25 01:32:58.120441 kernel: thermal_sys: Registered thermal governor 'step_wise' Mar 25 01:32:58.120458 kernel: 
thermal_sys: Registered thermal governor 'user_space' Mar 25 01:32:58.120476 kernel: cpuidle: using governor menu Mar 25 01:32:58.120495 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Mar 25 01:32:58.120514 kernel: dca service started, version 1.12.1 Mar 25 01:32:58.120533 kernel: PCI: Using configuration type 1 for base access Mar 25 01:32:58.120551 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Mar 25 01:32:58.120569 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Mar 25 01:32:58.120588 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Mar 25 01:32:58.120611 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Mar 25 01:32:58.120627 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Mar 25 01:32:58.120645 kernel: ACPI: Added _OSI(Module Device) Mar 25 01:32:58.120663 kernel: ACPI: Added _OSI(Processor Device) Mar 25 01:32:58.120681 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Mar 25 01:32:58.120700 kernel: ACPI: Added _OSI(Processor Aggregator Device) Mar 25 01:32:58.120718 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Mar 25 01:32:58.120738 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Mar 25 01:32:58.120757 kernel: ACPI: Interpreter enabled Mar 25 01:32:58.120781 kernel: ACPI: PM: (supports S0 S3 S5) Mar 25 01:32:58.120800 kernel: ACPI: Using IOAPIC for interrupt routing Mar 25 01:32:58.120819 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Mar 25 01:32:58.120837 kernel: PCI: Ignoring E820 reservations for host bridge windows Mar 25 01:32:58.120854 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Mar 25 01:32:58.120872 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Mar 25 01:32:58.121153 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Mar 25 01:32:58.123409 kernel: acpi PNP0A03:00: _OSC: not 
requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Mar 25 01:32:58.123635 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Mar 25 01:32:58.123662 kernel: PCI host bridge to bus 0000:00 Mar 25 01:32:58.123872 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Mar 25 01:32:58.124055 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Mar 25 01:32:58.124228 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Mar 25 01:32:58.124437 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Mar 25 01:32:58.124605 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Mar 25 01:32:58.124816 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Mar 25 01:32:58.125013 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Mar 25 01:32:58.125230 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Mar 25 01:32:58.127513 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Mar 25 01:32:58.127724 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Mar 25 01:32:58.127911 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Mar 25 01:32:58.128103 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Mar 25 01:32:58.129333 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Mar 25 01:32:58.129546 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Mar 25 01:32:58.129735 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Mar 25 01:32:58.129933 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Mar 25 01:32:58.130122 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Mar 25 01:32:58.132401 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Mar 25 01:32:58.132448 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Mar 25 01:32:58.132470 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 
10 Mar 25 01:32:58.132491 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Mar 25 01:32:58.132512 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Mar 25 01:32:58.132533 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Mar 25 01:32:58.132553 kernel: iommu: Default domain type: Translated Mar 25 01:32:58.132574 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Mar 25 01:32:58.132593 kernel: efivars: Registered efivars operations Mar 25 01:32:58.132613 kernel: PCI: Using ACPI for IRQ routing Mar 25 01:32:58.132638 kernel: PCI: pci_cache_line_size set to 64 bytes Mar 25 01:32:58.132659 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Mar 25 01:32:58.132679 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Mar 25 01:32:58.132699 kernel: e820: reserve RAM buffer [mem 0xbd327000-0xbfffffff] Mar 25 01:32:58.132718 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Mar 25 01:32:58.132738 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Mar 25 01:32:58.132757 kernel: vgaarb: loaded Mar 25 01:32:58.132777 kernel: clocksource: Switched to clocksource kvm-clock Mar 25 01:32:58.132801 kernel: VFS: Disk quotas dquot_6.6.0 Mar 25 01:32:58.132822 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Mar 25 01:32:58.132842 kernel: pnp: PnP ACPI init Mar 25 01:32:58.132861 kernel: pnp: PnP ACPI: found 7 devices Mar 25 01:32:58.132882 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Mar 25 01:32:58.132902 kernel: NET: Registered PF_INET protocol family Mar 25 01:32:58.132922 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Mar 25 01:32:58.132943 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Mar 25 01:32:58.132963 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Mar 25 01:32:58.132988 kernel: TCP established hash table entries: 
65536 (order: 7, 524288 bytes, linear) Mar 25 01:32:58.133008 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Mar 25 01:32:58.133028 kernel: TCP: Hash tables configured (established 65536 bind 65536) Mar 25 01:32:58.133048 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 25 01:32:58.133068 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Mar 25 01:32:58.133089 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Mar 25 01:32:58.133109 kernel: NET: Registered PF_XDP protocol family Mar 25 01:32:58.133353 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Mar 25 01:32:58.133559 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Mar 25 01:32:58.133751 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Mar 25 01:32:58.133916 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Mar 25 01:32:58.134120 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Mar 25 01:32:58.134146 kernel: PCI: CLS 0 bytes, default 64 Mar 25 01:32:58.134166 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Mar 25 01:32:58.134183 kernel: software IO TLB: mapped [mem 0x00000000b7f7f000-0x00000000bbf7f000] (64MB) Mar 25 01:32:58.134199 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Mar 25 01:32:58.134223 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Mar 25 01:32:58.134241 kernel: clocksource: Switched to clocksource tsc Mar 25 01:32:58.134274 kernel: Initialise system trusted keyrings Mar 25 01:32:58.134293 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Mar 25 01:32:58.134311 kernel: Key type asymmetric registered Mar 25 01:32:58.134328 kernel: Asymmetric key parser 'x509' registered Mar 25 01:32:58.134345 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Mar 25 01:32:58.134363 kernel: 
io scheduler mq-deadline registered Mar 25 01:32:58.134389 kernel: io scheduler kyber registered Mar 25 01:32:58.134411 kernel: io scheduler bfq registered Mar 25 01:32:58.134429 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Mar 25 01:32:58.134448 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Mar 25 01:32:58.134654 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Mar 25 01:32:58.134679 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Mar 25 01:32:58.134869 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Mar 25 01:32:58.134895 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Mar 25 01:32:58.135102 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Mar 25 01:32:58.135126 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Mar 25 01:32:58.135151 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Mar 25 01:32:58.135171 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Mar 25 01:32:58.135190 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Mar 25 01:32:58.135210 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Mar 25 01:32:58.135441 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Mar 25 01:32:58.135468 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Mar 25 01:32:58.135488 kernel: i8042: Warning: Keylock active Mar 25 01:32:58.135506 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Mar 25 01:32:58.135529 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Mar 25 01:32:58.135717 kernel: rtc_cmos 00:00: RTC can wake from S4 Mar 25 01:32:58.135938 kernel: rtc_cmos 00:00: registered as rtc0 Mar 25 01:32:58.136136 kernel: rtc_cmos 00:00: setting system clock to 2025-03-25T01:32:57 UTC (1742866377) Mar 25 01:32:58.136344 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Mar 25 01:32:58.136378 kernel: intel_pstate: CPU model not 
supported Mar 25 01:32:58.136397 kernel: pstore: Using crash dump compression: deflate Mar 25 01:32:58.136415 kernel: pstore: Registered efi_pstore as persistent store backend Mar 25 01:32:58.136440 kernel: NET: Registered PF_INET6 protocol family Mar 25 01:32:58.136459 kernel: Segment Routing with IPv6 Mar 25 01:32:58.136477 kernel: In-situ OAM (IOAM) with IPv6 Mar 25 01:32:58.136495 kernel: NET: Registered PF_PACKET protocol family Mar 25 01:32:58.136513 kernel: Key type dns_resolver registered Mar 25 01:32:58.136531 kernel: IPI shorthand broadcast: enabled Mar 25 01:32:58.136550 kernel: sched_clock: Marking stable (860004083, 141678974)->(1028833323, -27150266) Mar 25 01:32:58.136568 kernel: registered taskstats version 1 Mar 25 01:32:58.136586 kernel: Loading compiled-in X.509 certificates Mar 25 01:32:58.136609 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: eff01054e94a599f8e404b9a9482f4e2220f5386' Mar 25 01:32:58.136627 kernel: Key type .fscrypt registered Mar 25 01:32:58.136645 kernel: Key type fscrypt-provisioning registered Mar 25 01:32:58.136663 kernel: ima: Allocated hash algorithm: sha1 Mar 25 01:32:58.136681 kernel: ima: No architecture policies found Mar 25 01:32:58.136699 kernel: clk: Disabling unused clocks Mar 25 01:32:58.136717 kernel: Freeing unused kernel image (initmem) memory: 43592K Mar 25 01:32:58.136735 kernel: Write protecting the kernel read-only data: 40960k Mar 25 01:32:58.136753 kernel: Freeing unused kernel image (rodata/data gap) memory: 1564K Mar 25 01:32:58.136775 kernel: Run /init as init process Mar 25 01:32:58.136794 kernel: with arguments: Mar 25 01:32:58.136811 kernel: /init Mar 25 01:32:58.136829 kernel: with environment: Mar 25 01:32:58.136846 kernel: HOME=/ Mar 25 01:32:58.136864 kernel: TERM=linux Mar 25 01:32:58.136882 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Mar 25 01:32:58.136900 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Mar 25 
01:32:58.136921 systemd[1]: Successfully made /usr/ read-only. Mar 25 01:32:58.136948 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Mar 25 01:32:58.136968 systemd[1]: Detected virtualization google. Mar 25 01:32:58.136987 systemd[1]: Detected architecture x86-64. Mar 25 01:32:58.137005 systemd[1]: Running in initrd. Mar 25 01:32:58.137023 systemd[1]: No hostname configured, using default hostname. Mar 25 01:32:58.137043 systemd[1]: Hostname set to . Mar 25 01:32:58.137065 systemd[1]: Initializing machine ID from random generator. Mar 25 01:32:58.137084 systemd[1]: Queued start job for default target initrd.target. Mar 25 01:32:58.137103 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 25 01:32:58.137121 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 25 01:32:58.137143 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Mar 25 01:32:58.137162 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 25 01:32:58.137181 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Mar 25 01:32:58.137208 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Mar 25 01:32:58.137244 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Mar 25 01:32:58.137286 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Mar 25 01:32:58.137306 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 25 01:32:58.137326 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 25 01:32:58.137346 systemd[1]: Reached target paths.target - Path Units. Mar 25 01:32:58.137376 systemd[1]: Reached target slices.target - Slice Units. Mar 25 01:32:58.137396 systemd[1]: Reached target swap.target - Swaps. Mar 25 01:32:58.137416 systemd[1]: Reached target timers.target - Timer Units. Mar 25 01:32:58.137436 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Mar 25 01:32:58.137455 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 25 01:32:58.137476 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 25 01:32:58.137496 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Mar 25 01:32:58.137516 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 25 01:32:58.137536 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 25 01:32:58.137559 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 25 01:32:58.137579 systemd[1]: Reached target sockets.target - Socket Units. Mar 25 01:32:58.137599 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 25 01:32:58.137619 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 25 01:32:58.137639 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 25 01:32:58.137659 systemd[1]: Starting systemd-fsck-usr.service... Mar 25 01:32:58.137679 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 25 01:32:58.137698 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 25 01:32:58.137722 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 25 01:32:58.137742 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Mar 25 01:32:58.137762 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:32:58.137824 systemd-journald[184]: Collecting audit messages is disabled.
Mar 25 01:32:58.137871 systemd[1]: Finished systemd-fsck-usr.service.
Mar 25 01:32:58.137891 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Mar 25 01:32:58.137912 systemd-journald[184]: Journal started
Mar 25 01:32:58.137956 systemd-journald[184]: Runtime Journal (/run/log/journal/fd2e05b84dfe4fc089d4a5d04c4acf4f) is 8M, max 148.6M, 140.6M free.
Mar 25 01:32:58.144343 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:58.102520 systemd-modules-load[185]: Inserted module 'overlay'
Mar 25 01:32:58.149272 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:32:58.152891 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Mar 25 01:32:58.163410 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Mar 25 01:32:58.163447 kernel: Bridge firewalling registered
Mar 25 01:32:58.162887 systemd-modules-load[185]: Inserted module 'br_netfilter'
Mar 25 01:32:58.164319 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:32:58.170435 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:58.186086 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:32:58.189455 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:32:58.190676 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:32:58.222331 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:32:58.226671 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:58.231857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:32:58.239690 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:32:58.246164 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Mar 25 01:32:58.269442 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:32:58.286152 dracut-cmdline[218]: dracut-dracut-053
Mar 25 01:32:58.291113 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=e7a00b7ee8d97e8d255663e9d3fa92277da8316702fb7f6d664fd7b137c307e9
Mar 25 01:32:58.340637 systemd-resolved[219]: Positive Trust Anchors:
Mar 25 01:32:58.341215 systemd-resolved[219]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:32:58.341466 systemd-resolved[219]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:32:58.349656 systemd-resolved[219]: Defaulting to hostname 'linux'.
Mar 25 01:32:58.354429 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:32:58.361496 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:32:58.400291 kernel: SCSI subsystem initialized
Mar 25 01:32:58.411300 kernel: Loading iSCSI transport class v2.0-870.
Mar 25 01:32:58.423358 kernel: iscsi: registered transport (tcp)
Mar 25 01:32:58.446294 kernel: iscsi: registered transport (qla4xxx)
Mar 25 01:32:58.446379 kernel: QLogic iSCSI HBA Driver
Mar 25 01:32:58.498203 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:32:58.501119 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Mar 25 01:32:58.548282 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Mar 25 01:32:58.548378 kernel: device-mapper: uevent: version 1.0.3
Mar 25 01:32:58.548406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Mar 25 01:32:58.593312 kernel: raid6: avx2x4 gen() 17884 MB/s
Mar 25 01:32:58.610292 kernel: raid6: avx2x2 gen() 17919 MB/s
Mar 25 01:32:58.627774 kernel: raid6: avx2x1 gen() 14008 MB/s
Mar 25 01:32:58.627836 kernel: raid6: using algorithm avx2x2 gen() 17919 MB/s
Mar 25 01:32:58.654522 kernel: raid6: .... xor() 18656 MB/s, rmw enabled
Mar 25 01:32:58.654632 kernel: raid6: using avx2x2 recovery algorithm
Mar 25 01:32:58.683297 kernel: xor: automatically using best checksumming function avx
Mar 25 01:32:58.851296 kernel: Btrfs loaded, zoned=no, fsverity=no
Mar 25 01:32:58.863936 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:32:58.866445 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:32:58.928920 systemd-udevd[402]: Using default interface naming scheme 'v255'.
Mar 25 01:32:58.937172 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:32:58.965509 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Mar 25 01:32:59.010311 dracut-pre-trigger[415]: rd.md=0: removing MD RAID activation
Mar 25 01:32:59.047442 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:32:59.068729 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:32:59.180371 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:32:59.194330 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Mar 25 01:32:59.250514 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:32:59.272084 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:32:59.296454 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:32:59.352755 kernel: cryptd: max_cpu_qlen set to 1000
Mar 25 01:32:59.352797 kernel: scsi host0: Virtio SCSI HBA
Mar 25 01:32:59.316389 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:32:59.389844 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Mar 25 01:32:59.340459 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Mar 25 01:32:59.430366 kernel: AVX2 version of gcm_enc/dec engaged.
Mar 25 01:32:59.430846 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:32:59.552423 kernel: AES CTR mode by8 optimization enabled
Mar 25 01:32:59.552465 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Mar 25 01:32:59.552750 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Mar 25 01:32:59.553019 kernel: sd 0:0:1:0: [sda] Write Protect is off
Mar 25 01:32:59.553289 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Mar 25 01:32:59.553528 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Mar 25 01:32:59.553744 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Mar 25 01:32:59.553769 kernel: GPT:17805311 != 25165823
Mar 25 01:32:59.553792 kernel: GPT:Alternate GPT header not at the end of the disk.
Mar 25 01:32:59.553815 kernel: GPT:17805311 != 25165823
Mar 25 01:32:59.553837 kernel: GPT: Use GNU Parted to correct GPT errors.
Mar 25 01:32:59.553868 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:32:59.553892 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Mar 25 01:32:59.431061 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:32:59.449372 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:59.459386 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:32:59.459647 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:59.498667 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:32:59.501654 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:32:59.576061 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:32:59.576662 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:32:59.631379 kernel: BTRFS: device fsid 6d9424cd-1432-492b-b006-b311869817e2 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (468)
Mar 25 01:32:59.646277 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by (udev-worker) (467)
Mar 25 01:32:59.672218 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - PersistentDisk ROOT.
Mar 25 01:32:59.694768 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:32:59.719510 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - PersistentDisk EFI-SYSTEM.
Mar 25 01:32:59.741734 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 25 01:32:59.752541 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - PersistentDisk USR-A.
Mar 25 01:32:59.761627 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - PersistentDisk USR-A.
Mar 25 01:32:59.783716 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Mar 25 01:32:59.830672 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Mar 25 01:32:59.858325 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:32:59.858373 disk-uuid[543]: Primary Header is updated.
Mar 25 01:32:59.858373 disk-uuid[543]: Secondary Entries is updated.
Mar 25 01:32:59.858373 disk-uuid[543]: Secondary Header is updated.
Mar 25 01:32:59.888289 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:32:59.918286 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:00.904281 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Mar 25 01:33:00.904400 disk-uuid[545]: The operation has completed successfully.
Mar 25 01:33:00.990575 systemd[1]: disk-uuid.service: Deactivated successfully.
Mar 25 01:33:00.990728 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Mar 25 01:33:01.037035 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Mar 25 01:33:01.066501 sh[567]: Success
Mar 25 01:33:01.090324 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Mar 25 01:33:01.182645 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Mar 25 01:33:01.187370 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Mar 25 01:33:01.220218 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Mar 25 01:33:01.257706 kernel: BTRFS info (device dm-0): first mount of filesystem 6d9424cd-1432-492b-b006-b311869817e2
Mar 25 01:33:01.257779 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:01.257804 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Mar 25 01:33:01.272815 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Mar 25 01:33:01.272898 kernel: BTRFS info (device dm-0): using free space tree
Mar 25 01:33:01.308295 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Mar 25 01:33:01.313130 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Mar 25 01:33:01.314110 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Mar 25 01:33:01.315206 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Mar 25 01:33:01.328503 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Mar 25 01:33:01.406201 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:01.406330 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:01.406357 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:33:01.424202 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 25 01:33:01.424298 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:33:01.439275 kernel: BTRFS info (device sda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:01.448327 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Mar 25 01:33:01.461470 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Mar 25 01:33:01.561400 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:33:01.585134 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:33:01.676534 ignition[653]: Ignition 2.20.0
Mar 25 01:33:01.677412 ignition[653]: Stage: fetch-offline
Mar 25 01:33:01.679985 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:33:01.677477 ignition[653]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:01.686404 systemd-networkd[746]: lo: Link UP
Mar 25 01:33:01.677494 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:01.686410 systemd-networkd[746]: lo: Gained carrier
Mar 25 01:33:01.677660 ignition[653]: parsed url from cmdline: ""
Mar 25 01:33:01.688220 systemd-networkd[746]: Enumeration completed
Mar 25 01:33:01.677668 ignition[653]: no config URL provided
Mar 25 01:33:01.688803 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:01.677678 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:33:01.688811 systemd-networkd[746]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:33:01.677692 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:33:01.690674 systemd-networkd[746]: eth0: Link UP
Mar 25 01:33:01.677705 ignition[653]: failed to fetch config: resource requires networking
Mar 25 01:33:01.690681 systemd-networkd[746]: eth0: Gained carrier
Mar 25 01:33:01.678008 ignition[653]: Ignition finished successfully
Mar 25 01:33:01.690694 systemd-networkd[746]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:01.790177 ignition[756]: Ignition 2.20.0
Mar 25 01:33:01.705370 systemd-networkd[746]: eth0: DHCPv4 address 10.128.0.106/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 25 01:33:01.790187 ignition[756]: Stage: fetch
Mar 25 01:33:01.710656 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:33:01.790403 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:01.729061 systemd[1]: Reached target network.target - Network.
Mar 25 01:33:01.790416 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:01.744528 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Mar 25 01:33:01.790536 ignition[756]: parsed url from cmdline: ""
Mar 25 01:33:01.802600 unknown[756]: fetched base config from "system"
Mar 25 01:33:01.790543 ignition[756]: no config URL provided
Mar 25 01:33:01.802623 unknown[756]: fetched base config from "system"
Mar 25 01:33:01.790552 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Mar 25 01:33:01.802633 unknown[756]: fetched user config from "gcp"
Mar 25 01:33:01.790563 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Mar 25 01:33:01.806150 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Mar 25 01:33:01.790590 ignition[756]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Mar 25 01:33:01.823332 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Mar 25 01:33:01.794741 ignition[756]: GET result: OK
Mar 25 01:33:01.876293 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Mar 25 01:33:01.794807 ignition[756]: parsing config with SHA512: b5bbac85f8620c7821a8fb523629bf6c9d70607e031acdd78a0941da1583bfc77dc1fa19e1ac23049e8e0f1203aaeefd270c8f1c09711ccbb04d6aadd7f5e984
Mar 25 01:33:01.894262 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Mar 25 01:33:01.803545 ignition[756]: fetch: fetch complete
Mar 25 01:33:01.941295 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Mar 25 01:33:01.804202 ignition[756]: fetch: fetch passed
Mar 25 01:33:01.960643 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Mar 25 01:33:01.804295 ignition[756]: Ignition finished successfully
Mar 25 01:33:01.979544 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Mar 25 01:33:01.873829 ignition[763]: Ignition 2.20.0
Mar 25 01:33:01.987597 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:33:01.873837 ignition[763]: Stage: kargs
Mar 25 01:33:02.016407 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:33:01.874028 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:02.033534 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:33:01.874039 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:02.041746 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Mar 25 01:33:01.875024 ignition[763]: kargs: kargs passed
Mar 25 01:33:01.875076 ignition[763]: Ignition finished successfully
Mar 25 01:33:01.938879 ignition[770]: Ignition 2.20.0
Mar 25 01:33:01.938888 ignition[770]: Stage: disks
Mar 25 01:33:01.939076 ignition[770]: no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:01.939087 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:01.940076 ignition[770]: disks: disks passed
Mar 25 01:33:01.940135 ignition[770]: Ignition finished successfully
Mar 25 01:33:02.106068 systemd-fsck[778]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Mar 25 01:33:02.280244 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Mar 25 01:33:02.303160 systemd[1]: Mounting sysroot.mount - /sysroot...
Mar 25 01:33:02.443283 kernel: EXT4-fs (sda9): mounted filesystem 4e6dca82-2e50-453c-be25-61f944b72008 r/w with ordered data mode. Quota mode: none.
Mar 25 01:33:02.444848 systemd[1]: Mounted sysroot.mount - /sysroot.
Mar 25 01:33:02.445787 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:33:02.480823 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:33:02.493126 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Mar 25 01:33:02.514999 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Mar 25 01:33:02.578682 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (786)
Mar 25 01:33:02.578737 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:02.578762 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:02.578792 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:33:02.578817 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 25 01:33:02.578841 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:33:02.515113 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Mar 25 01:33:02.515158 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:33:02.592427 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:33:02.613131 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Mar 25 01:33:02.632218 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Mar 25 01:33:02.772283 initrd-setup-root[810]: cut: /sysroot/etc/passwd: No such file or directory
Mar 25 01:33:02.782411 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Mar 25 01:33:02.792401 initrd-setup-root[824]: cut: /sysroot/etc/shadow: No such file or directory
Mar 25 01:33:02.802391 initrd-setup-root[831]: cut: /sysroot/etc/gshadow: No such file or directory
Mar 25 01:33:02.942043 systemd-networkd[746]: eth0: Gained IPv6LL
Mar 25 01:33:02.958388 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Mar 25 01:33:02.960299 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Mar 25 01:33:03.002289 kernel: BTRFS info (device sda6): last unmount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:03.007347 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Mar 25 01:33:03.016498 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Mar 25 01:33:03.047624 ignition[898]: INFO : Ignition 2.20.0
Mar 25 01:33:03.055441 ignition[898]: INFO : Stage: mount
Mar 25 01:33:03.055441 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:03.055441 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:03.055441 ignition[898]: INFO : mount: mount passed
Mar 25 01:33:03.055441 ignition[898]: INFO : Ignition finished successfully
Mar 25 01:33:03.051655 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Mar 25 01:33:03.082885 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Mar 25 01:33:03.095897 systemd[1]: Starting ignition-files.service - Ignition (files)...
Mar 25 01:33:03.446891 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Mar 25 01:33:03.492309 kernel: BTRFS: device label OEM devid 1 transid 18 /dev/sda6 scanned by mount (911)
Mar 25 01:33:03.502284 kernel: BTRFS info (device sda6): first mount of filesystem a72930ba-1354-475c-94df-b83a66efea67
Mar 25 01:33:03.502373 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Mar 25 01:33:03.515881 kernel: BTRFS info (device sda6): using free space tree
Mar 25 01:33:03.532120 kernel: BTRFS info (device sda6): enabling ssd optimizations
Mar 25 01:33:03.532208 kernel: BTRFS info (device sda6): auto enabling async discard
Mar 25 01:33:03.535483 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Mar 25 01:33:03.575997 ignition[928]: INFO : Ignition 2.20.0
Mar 25 01:33:03.575997 ignition[928]: INFO : Stage: files
Mar 25 01:33:03.590451 ignition[928]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:03.590451 ignition[928]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:03.590451 ignition[928]: DEBUG : files: compiled without relabeling support, skipping
Mar 25 01:33:03.590451 ignition[928]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Mar 25 01:33:03.590451 ignition[928]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Mar 25 01:33:03.647403 ignition[928]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Mar 25 01:33:03.647403 ignition[928]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Mar 25 01:33:03.647403 ignition[928]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Mar 25 01:33:03.647403 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:33:03.647403 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Mar 25 01:33:03.592991 unknown[928]: wrote ssh authorized keys file for user: core
Mar 25 01:33:03.727416 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Mar 25 01:33:03.917325 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:33:03.934522 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Mar 25 01:33:04.230860 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Mar 25 01:33:04.885552 ignition[928]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Mar 25 01:33:04.885552 ignition[928]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Mar 25 01:33:04.904614 ignition[928]: INFO : files: files passed
Mar 25 01:33:04.904614 ignition[928]: INFO : Ignition finished successfully
Mar 25 01:33:04.894027 systemd[1]: Finished ignition-files.service - Ignition (files).
Mar 25 01:33:04.926228 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Mar 25 01:33:04.970022 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Mar 25 01:33:04.982077 systemd[1]: ignition-quench.service: Deactivated successfully.
Mar 25 01:33:05.145575 initrd-setup-root-after-ignition[957]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:05.145575 initrd-setup-root-after-ignition[957]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:04.982217 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Mar 25 01:33:05.194649 initrd-setup-root-after-ignition[961]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Mar 25 01:33:05.045020 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:33:05.060099 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Mar 25 01:33:05.078046 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Mar 25 01:33:05.167461 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Mar 25 01:33:05.167599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Mar 25 01:33:05.185576 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Mar 25 01:33:05.204588 systemd[1]: Reached target initrd.target - Initrd Default Target.
Mar 25 01:33:05.225689 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 25 01:33:05.227071 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Mar 25 01:33:05.290727 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:33:05.304011 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Mar 25 01:33:05.359671 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:33:05.389984 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:33:05.410786 systemd[1]: Stopped target timers.target - Timer Units.
Mar 25 01:33:05.428784 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Mar 25 01:33:05.429035 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Mar 25 01:33:05.455816 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Mar 25 01:33:05.476873 systemd[1]: Stopped target basic.target - Basic System.
Mar 25 01:33:05.494773 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Mar 25 01:33:05.512801 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Mar 25 01:33:05.533757 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Mar 25 01:33:05.554798 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Mar 25 01:33:05.574863 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Mar 25 01:33:05.595742 systemd[1]: Stopped target sysinit.target - System Initialization.
Mar 25 01:33:05.616857 systemd[1]: Stopped target local-fs.target - Local File Systems.
Mar 25 01:33:05.636690 systemd[1]: Stopped target swap.target - Swaps.
Mar 25 01:33:05.654795 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Mar 25 01:33:05.655044 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Mar 25 01:33:05.679843 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:33:05.699828 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:33:05.720832 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Mar 25 01:33:05.721089 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:33:05.742815 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Mar 25 01:33:05.743072 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Mar 25 01:33:05.774897 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Mar 25 01:33:05.775184 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Mar 25 01:33:05.794889 systemd[1]: ignition-files.service: Deactivated successfully.
Mar 25 01:33:05.795114 systemd[1]: Stopped ignition-files.service - Ignition (files).
Mar 25 01:33:05.816221 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Mar 25 01:33:05.884651 ignition[982]: INFO : Ignition 2.20.0
Mar 25 01:33:05.884651 ignition[982]: INFO : Stage: umount
Mar 25 01:33:05.884651 ignition[982]: INFO : no configs at "/usr/lib/ignition/base.d"
Mar 25 01:33:05.884651 ignition[982]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Mar 25 01:33:05.884651 ignition[982]: INFO : umount: umount passed
Mar 25 01:33:05.884651 ignition[982]: INFO : Ignition finished successfully
Mar 25 01:33:05.842507 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Mar 25 01:33:05.842902 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:33:05.857889 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Mar 25 01:33:05.892460 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Mar 25 01:33:05.892954 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:33:05.904843 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Mar 25 01:33:05.905072 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Mar 25 01:33:05.962710 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Mar 25 01:33:05.964109 systemd[1]: ignition-mount.service: Deactivated successfully.
Mar 25 01:33:05.964233 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Mar 25 01:33:05.980128 systemd[1]: sysroot-boot.service: Deactivated successfully.
Mar 25 01:33:05.980245 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Mar 25 01:33:06.005236 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Mar 25 01:33:06.005427 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Mar 25 01:33:06.022707 systemd[1]: ignition-disks.service: Deactivated successfully.
Mar 25 01:33:06.022771 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Mar 25 01:33:06.030687 systemd[1]: ignition-kargs.service: Deactivated successfully.
Mar 25 01:33:06.030753 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Mar 25 01:33:06.048681 systemd[1]: ignition-fetch.service: Deactivated successfully.
Mar 25 01:33:06.048746 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Mar 25 01:33:06.065717 systemd[1]: Stopped target network.target - Network.
Mar 25 01:33:06.082657 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Mar 25 01:33:06.082748 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Mar 25 01:33:06.108610 systemd[1]: Stopped target paths.target - Path Units.
Mar 25 01:33:06.116621 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Mar 25 01:33:06.120381 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:33:06.142523 systemd[1]: Stopped target slices.target - Slice Units.
Mar 25 01:33:06.150566 systemd[1]: Stopped target sockets.target - Socket Units.
Mar 25 01:33:06.165721 systemd[1]: iscsid.socket: Deactivated successfully.
Mar 25 01:33:06.165786 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Mar 25 01:33:06.180697 systemd[1]: iscsiuio.socket: Deactivated successfully.
Mar 25 01:33:06.180758 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 25 01:33:06.198679 systemd[1]: ignition-setup.service: Deactivated successfully.
Mar 25 01:33:06.198760 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Mar 25 01:33:06.215709 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Mar 25 01:33:06.215782 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Mar 25 01:33:06.232694 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Mar 25 01:33:06.232773 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Mar 25 01:33:06.249877 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Mar 25 01:33:06.276583 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Mar 25 01:33:06.294928 systemd[1]: systemd-resolved.service: Deactivated successfully.
Mar 25 01:33:06.295060 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Mar 25 01:33:06.306173 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Mar 25 01:33:06.306484 systemd[1]: systemd-networkd.service: Deactivated successfully.
Mar 25 01:33:06.306611 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Mar 25 01:33:06.331284 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Mar 25 01:33:06.332847 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Mar 25 01:33:06.332901 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:33:06.340686 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Mar 25 01:33:06.366388 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Mar 25 01:33:06.366507 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Mar 25 01:33:06.377610 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Mar 25 01:33:06.824446 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Mar 25 01:33:06.377683 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:06.385713 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Mar 25 01:33:06.385777 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:33:06.403699 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Mar 25 01:33:06.403776 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:33:06.435837 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:33:06.464795 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Mar 25 01:33:06.464890 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:33:06.465456 systemd[1]: systemd-udevd.service: Deactivated successfully.
Mar 25 01:33:06.465622 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:33:06.484760 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Mar 25 01:33:06.484902 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:33:06.500521 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Mar 25 01:33:06.500608 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:33:06.510464 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Mar 25 01:33:06.510563 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Mar 25 01:33:06.538608 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Mar 25 01:33:06.538717 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Mar 25 01:33:06.565396 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Mar 25 01:33:06.565512 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Mar 25 01:33:06.597640 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Mar 25 01:33:06.606566 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Mar 25 01:33:06.606653 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:33:06.653795 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Mar 25 01:33:06.653887 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:06.673848 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Mar 25 01:33:06.673935 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Mar 25 01:33:06.674513 systemd[1]: network-cleanup.service: Deactivated successfully.
Mar 25 01:33:06.674641 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Mar 25 01:33:06.692921 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Mar 25 01:33:06.693041 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Mar 25 01:33:06.715957 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Mar 25 01:33:06.733539 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Mar 25 01:33:06.775742 systemd[1]: Switching root.
Mar 25 01:33:07.175506 systemd-journald[184]: Journal stopped
Mar 25 01:33:09.730259 kernel: SELinux: policy capability network_peer_controls=1
Mar 25 01:33:09.730305 kernel: SELinux: policy capability open_perms=1
Mar 25 01:33:09.730333 kernel: SELinux: policy capability extended_socket_class=1
Mar 25 01:33:09.730350 kernel: SELinux: policy capability always_check_network=0
Mar 25 01:33:09.730368 kernel: SELinux: policy capability cgroup_seclabel=1
Mar 25 01:33:09.730385 kernel: SELinux: policy capability nnp_nosuid_transition=1
Mar 25 01:33:09.730407 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Mar 25 01:33:09.730424 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Mar 25 01:33:09.730447 kernel: audit: type=1403 audit(1742866387.394:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Mar 25 01:33:09.730469 systemd[1]: Successfully loaded SELinux policy in 93.535ms.
Mar 25 01:33:09.730491 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 14.491ms.
Mar 25 01:33:09.730513 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Mar 25 01:33:09.730532 systemd[1]: Detected virtualization google.
Mar 25 01:33:09.730552 systemd[1]: Detected architecture x86-64.
Mar 25 01:33:09.730578 systemd[1]: Detected first boot.
Mar 25 01:33:09.730600 systemd[1]: Initializing machine ID from random generator.
Mar 25 01:33:09.730621 zram_generator::config[1025]: No configuration found.
Mar 25 01:33:09.730642 kernel: Guest personality initialized and is inactive
Mar 25 01:33:09.730664 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Mar 25 01:33:09.730687 kernel: Initialized host personality
Mar 25 01:33:09.730706 kernel: NET: Registered PF_VSOCK protocol family
Mar 25 01:33:09.730725 systemd[1]: Populated /etc with preset unit settings.
Mar 25 01:33:09.730747 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Mar 25 01:33:09.730768 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Mar 25 01:33:09.730788 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Mar 25 01:33:09.730808 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Mar 25 01:33:09.730829 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Mar 25 01:33:09.730850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Mar 25 01:33:09.730876 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Mar 25 01:33:09.730898 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Mar 25 01:33:09.730919 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Mar 25 01:33:09.730940 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Mar 25 01:33:09.730961 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Mar 25 01:33:09.730982 systemd[1]: Created slice user.slice - User and Session Slice.
Mar 25 01:33:09.731003 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 25 01:33:09.731035 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 25 01:33:09.731057 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Mar 25 01:33:09.731078 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Mar 25 01:33:09.731100 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Mar 25 01:33:09.731123 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 25 01:33:09.731151 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Mar 25 01:33:09.731172 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 25 01:33:09.731194 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Mar 25 01:33:09.731219 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Mar 25 01:33:09.731241 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Mar 25 01:33:09.731275 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Mar 25 01:33:09.731297 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Mar 25 01:33:09.731318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Mar 25 01:33:09.731340 systemd[1]: Reached target slices.target - Slice Units.
Mar 25 01:33:09.731361 systemd[1]: Reached target swap.target - Swaps.
Mar 25 01:33:09.731382 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Mar 25 01:33:09.731409 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Mar 25 01:33:09.731430 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Mar 25 01:33:09.731452 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Mar 25 01:33:09.731475 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Mar 25 01:33:09.731501 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Mar 25 01:33:09.731523 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Mar 25 01:33:09.731544 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Mar 25 01:33:09.731566 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Mar 25 01:33:09.731588 systemd[1]: Mounting media.mount - External Media Directory...
Mar 25 01:33:09.731611 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:09.731634 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Mar 25 01:33:09.731656 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Mar 25 01:33:09.731678 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Mar 25 01:33:09.731705 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Mar 25 01:33:09.731727 systemd[1]: Reached target machines.target - Containers.
Mar 25 01:33:09.731749 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Mar 25 01:33:09.731771 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:09.731793 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Mar 25 01:33:09.731815 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Mar 25 01:33:09.731837 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:09.731859 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:33:09.731885 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:09.731907 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Mar 25 01:33:09.731929 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:33:09.731951 kernel: fuse: init (API version 7.39)
Mar 25 01:33:09.731973 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Mar 25 01:33:09.731995 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Mar 25 01:33:09.732017 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Mar 25 01:33:09.732048 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Mar 25 01:33:09.732074 kernel: ACPI: bus type drm_connector registered
Mar 25 01:33:09.732096 systemd[1]: Stopped systemd-fsck-usr.service.
Mar 25 01:33:09.732118 kernel: loop: module loaded
Mar 25 01:33:09.732139 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:09.732161 systemd[1]: Starting systemd-journald.service - Journal Service...
Mar 25 01:33:09.732183 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Mar 25 01:33:09.732206 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Mar 25 01:33:09.732228 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Mar 25 01:33:09.732301 systemd-journald[1113]: Collecting audit messages is disabled.
Mar 25 01:33:09.732347 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Mar 25 01:33:09.732381 systemd-journald[1113]: Journal started
Mar 25 01:33:09.732430 systemd-journald[1113]: Runtime Journal (/run/log/journal/c93d45668005459d94a6db86ef0af00b) is 8M, max 148.6M, 140.6M free.
Mar 25 01:33:08.478615 systemd[1]: Queued start job for default target multi-user.target.
Mar 25 01:33:08.491077 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Mar 25 01:33:08.491739 systemd[1]: systemd-journald.service: Deactivated successfully.
Mar 25 01:33:09.773485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Mar 25 01:33:09.773592 systemd[1]: verity-setup.service: Deactivated successfully.
Mar 25 01:33:09.780212 systemd[1]: Stopped verity-setup.service.
Mar 25 01:33:09.810286 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:09.830304 systemd[1]: Started systemd-journald.service - Journal Service.
Mar 25 01:33:09.841955 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Mar 25 01:33:09.852651 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Mar 25 01:33:09.862652 systemd[1]: Mounted media.mount - External Media Directory.
Mar 25 01:33:09.873701 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Mar 25 01:33:09.883630 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Mar 25 01:33:09.894589 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Mar 25 01:33:09.904906 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Mar 25 01:33:09.916904 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Mar 25 01:33:09.928769 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Mar 25 01:33:09.929049 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Mar 25 01:33:09.940769 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:09.941067 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:09.952797 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:33:09.953074 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:33:09.962762 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:09.963044 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:09.974803 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Mar 25 01:33:09.975088 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Mar 25 01:33:09.985815 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:33:09.986109 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:33:09.996886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Mar 25 01:33:10.007853 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Mar 25 01:33:10.020851 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Mar 25 01:33:10.032861 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Mar 25 01:33:10.044825 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Mar 25 01:33:10.070695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Mar 25 01:33:10.082449 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Mar 25 01:33:10.107393 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Mar 25 01:33:10.117449 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Mar 25 01:33:10.117524 systemd[1]: Reached target local-fs.target - Local File Systems.
Mar 25 01:33:10.128837 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Mar 25 01:33:10.142803 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 25 01:33:10.162217 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 25 01:33:10.173596 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:10.180629 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Mar 25 01:33:10.195911 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Mar 25 01:33:10.207478 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:33:10.211401 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Mar 25 01:33:10.220468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:33:10.224202 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Mar 25 01:33:10.238438 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 25 01:33:10.240422 systemd-journald[1113]: Time spent on flushing to /var/log/journal/c93d45668005459d94a6db86ef0af00b is 55.148ms for 946 entries.
Mar 25 01:33:10.240422 systemd-journald[1113]: System Journal (/var/log/journal/c93d45668005459d94a6db86ef0af00b) is 8M, max 584.8M, 576.8M free.
Mar 25 01:33:10.337031 systemd-journald[1113]: Received client request to flush runtime journal.
Mar 25 01:33:10.337101 kernel: loop0: detected capacity change from 0 to 52016
Mar 25 01:33:10.277777 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Mar 25 01:33:10.295633 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Mar 25 01:33:10.309841 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Mar 25 01:33:10.321606 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Mar 25 01:33:10.339190 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 25 01:33:10.352139 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Mar 25 01:33:10.364193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Mar 25 01:33:10.376083 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Mar 25 01:33:10.410161 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Mar 25 01:33:10.419833 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 25 01:33:10.431431 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Mar 25 01:33:10.444125 udevadm[1151]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Mar 25 01:33:10.463302 kernel: loop1: detected capacity change from 0 to 151640
Mar 25 01:33:10.490004 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Mar 25 01:33:10.502319 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 25 01:33:10.505035 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Mar 25 01:33:10.522751 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Mar 25 01:33:10.566299 kernel: loop2: detected capacity change from 0 to 210664
Mar 25 01:33:10.589098 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 25 01:33:10.589135 systemd-tmpfiles[1168]: ACLs are not supported, ignoring.
Mar 25 01:33:10.604791 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Mar 25 01:33:10.714311 kernel: loop3: detected capacity change from 0 to 109808
Mar 25 01:33:10.795736 kernel: loop4: detected capacity change from 0 to 52016
Mar 25 01:33:10.831279 kernel: loop5: detected capacity change from 0 to 151640
Mar 25 01:33:10.891279 kernel: loop6: detected capacity change from 0 to 210664
Mar 25 01:33:10.940332 kernel: loop7: detected capacity change from 0 to 109808
Mar 25 01:33:10.978444 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-gce'.
Mar 25 01:33:10.979495 (sd-merge)[1174]: Merged extensions into '/usr'.
Mar 25 01:33:10.990914 systemd[1]: Reload requested from client PID 1149 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 25 01:33:10.991410 systemd[1]: Reloading...
Mar 25 01:33:11.171620 zram_generator::config[1198]: No configuration found.
Mar 25 01:33:11.412146 ldconfig[1144]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 25 01:33:11.452831 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:33:11.606047 systemd[1]: Reloading finished in 613 ms.
Mar 25 01:33:11.624669 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 25 01:33:11.635040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 25 01:33:11.660484 systemd[1]: Starting ensure-sysext.service...
Mar 25 01:33:11.670168 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 25 01:33:11.708460 systemd[1]: Reload requested from client PID 1242 ('systemctl') (unit ensure-sysext.service)...
Mar 25 01:33:11.708484 systemd[1]: Reloading...
Mar 25 01:33:11.739006 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 25 01:33:11.740157 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 25 01:33:11.741406 systemd-tmpfiles[1243]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 25 01:33:11.741829 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Mar 25 01:33:11.741959 systemd-tmpfiles[1243]: ACLs are not supported, ignoring.
Mar 25 01:33:11.749908 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:33:11.749928 systemd-tmpfiles[1243]: Skipping /boot
Mar 25 01:33:11.775208 systemd-tmpfiles[1243]: Detected autofs mount point /boot during canonicalization of boot.
Mar 25 01:33:11.775237 systemd-tmpfiles[1243]: Skipping /boot
Mar 25 01:33:11.849281 zram_generator::config[1272]: No configuration found.
Mar 25 01:33:11.994195 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:33:12.088103 systemd[1]: Reloading finished in 378 ms.
Mar 25 01:33:12.105179 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Mar 25 01:33:12.136981 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 25 01:33:12.157927 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:33:12.174741 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 25 01:33:12.190551 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 25 01:33:12.210449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 25 01:33:12.224463 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Mar 25 01:33:12.235982 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 25 01:33:12.256472 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.256848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:12.260633 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:12.275945 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:12.294294 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:33:12.304544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:12.304792 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:12.308736 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Mar 25 01:33:12.318369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.323268 augenrules[1343]: No rules
Mar 25 01:33:12.326979 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:33:12.328502 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:33:12.339679 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 25 01:33:12.354822 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:12.355158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:12.360874 systemd-udevd[1331]: Using default interface naming scheme 'v255'.
Mar 25 01:33:12.367877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:12.368314 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:12.380192 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:33:12.380515 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:33:12.392234 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 25 01:33:12.404348 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Mar 25 01:33:12.437672 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.438116 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:12.443371 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:12.456594 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:12.470326 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 25 01:33:12.480511 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:12.481522 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:12.485614 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 25 01:33:12.495419 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:33:12.495636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.500574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Mar 25 01:33:12.514546 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 25 01:33:12.527586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:12.528013 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:12.540208 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:12.540781 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:12.586157 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 25 01:33:12.586491 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 25 01:33:12.604358 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 25 01:33:12.615342 systemd[1]: Finished ensure-sysext.service.
Mar 25 01:33:12.623670 systemd-resolved[1326]: Positive Trust Anchors:
Mar 25 01:33:12.624225 systemd-resolved[1326]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 25 01:33:12.624454 systemd-resolved[1326]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 25 01:33:12.649471 systemd[1]: Condition check resulted in dev-tpmrm0.device - /dev/tpmrm0 being skipped.
Mar 25 01:33:12.649954 systemd-resolved[1326]: Defaulting to hostname 'linux'.
Mar 25 01:33:12.651048 systemd[1]: Reached target tpm2.target - Trusted Platform Module.
Mar 25 01:33:12.661512 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.665022 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 25 01:33:12.673646 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 25 01:33:12.676748 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 25 01:33:12.691415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 25 01:33:12.706581 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 25 01:33:12.724760 systemd[1]: Starting setup-oem.service - Setup OEM...
Mar 25 01:33:12.734585 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 25 01:33:12.734877 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Mar 25 01:33:12.748629 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Mar 25 01:33:12.759440 systemd[1]: Reached target time-set.target - System Time Set.
Mar 25 01:33:12.777144 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Mar 25 01:33:12.776108 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 25 01:33:12.776163 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 25 01:33:12.776794 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 25 01:33:12.784801 augenrules[1390]: /sbin/augenrules: No change
Mar 25 01:33:12.787581 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 25 01:33:12.788654 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 25 01:33:12.805853 kernel: ACPI: button: Power Button [PWRF]
Mar 25 01:33:12.814525 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 25 01:33:12.814895 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 25 01:33:12.824306 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4
Mar 25 01:33:12.825425 augenrules[1415]: No rules
Mar 25 01:33:12.833004 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 25 01:33:12.833355 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 25 01:33:12.837293 kernel: ACPI: button: Sleep Button [SLPF]
Mar 25 01:33:12.846954 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 25 01:33:12.847297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 25 01:33:12.875875 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Mar 25 01:33:12.879339 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input5
Mar 25 01:33:12.884933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 25 01:33:12.897274 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr
Mar 25 01:33:12.906614 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 25 01:33:12.906723 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 25 01:33:12.907516 systemd[1]: Finished setup-oem.service - Setup OEM.
Mar 25 01:33:12.961289 kernel: EDAC MC: Ver: 3.0.0
Mar 25 01:33:13.012058 systemd[1]: Starting oem-gce-enable-oslogin.service - Enable GCE OS Login...
Mar 25 01:33:13.017552 systemd-networkd[1399]: lo: Link UP
Mar 25 01:33:13.017566 systemd-networkd[1399]: lo: Gained carrier
Mar 25 01:33:13.022171 systemd-networkd[1399]: Enumeration completed
Mar 25 01:33:13.022698 systemd[1]: Started systemd-networkd.service - Network Configuration.
Mar 25 01:33:13.024737 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:13.024746 systemd-networkd[1399]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Mar 25 01:33:13.025398 systemd-networkd[1399]: eth0: Link UP
Mar 25 01:33:13.025407 systemd-networkd[1399]: eth0: Gained carrier
Mar 25 01:33:13.025434 systemd-networkd[1399]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Mar 25 01:33:13.035113 systemd[1]: Reached target network.target - Network.
Mar 25 01:33:13.040777 systemd-networkd[1399]: eth0: DHCPv4 address 10.128.0.106/32, gateway 10.128.0.1 acquired from 169.254.169.254
Mar 25 01:33:13.054846 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1371)
Mar 25 01:33:13.061184 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Mar 25 01:33:13.076300 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Mar 25 01:33:13.091010 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Mar 25 01:33:13.137325 kernel: mousedev: PS/2 mouse device common for all mice
Mar 25 01:33:13.167040 systemd[1]: Finished oem-gce-enable-oslogin.service - Enable GCE OS Login.
Mar 25 01:33:13.192185 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Mar 25 01:33:13.209644 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - PersistentDisk OEM.
Mar 25 01:33:13.210395 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Mar 25 01:33:13.213669 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Mar 25 01:33:13.216136 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Mar 25 01:33:13.241920 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 25 01:33:13.249768 lvm[1456]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:33:13.283783 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Mar 25 01:33:13.284971 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 25 01:33:13.288666 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Mar 25 01:33:13.318142 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Mar 25 01:33:13.323553 lvm[1463]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Mar 25 01:33:13.329941 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 25 01:33:13.340637 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 25 01:33:13.352479 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 25 01:33:13.363599 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 25 01:33:13.373647 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 25 01:33:13.385427 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 25 01:33:13.396448 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 25 01:33:13.396511 systemd[1]: Reached target paths.target - Path Units.
Mar 25 01:33:13.405423 systemd[1]: Reached target timers.target - Timer Units.
Mar 25 01:33:13.416039 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 25 01:33:13.427234 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 25 01:33:13.438017 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Mar 25 01:33:13.449678 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Mar 25 01:33:13.461436 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Mar 25 01:33:13.482179 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 25 01:33:13.493922 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Mar 25 01:33:13.506564 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Mar 25 01:33:13.518698 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 25 01:33:13.529299 systemd[1]: Reached target sockets.target - Socket Units.
Mar 25 01:33:13.539422 systemd[1]: Reached target basic.target - Basic System.
Mar 25 01:33:13.547491 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:33:13.547546 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 25 01:33:13.549281 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 25 01:33:13.573112 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 25 01:33:13.590285 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 25 01:33:13.605897 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 25 01:33:13.624225 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 25 01:33:13.635398 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 25 01:33:13.638571 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 25 01:33:13.652981 systemd[1]: Started ntpd.service - Network Time Service.
Mar 25 01:33:13.662971 jq[1471]: false
Mar 25 01:33:13.663612 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Mar 25 01:33:13.675579 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 25 01:33:13.690881 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 25 01:33:13.709906 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 25 01:33:13.721573 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Mar 25 01:33:13.723128 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 25 01:33:13.729551 systemd[1]: Starting update-engine.service - Update Engine...
Mar 25 01:33:13.747483 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found loop4
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found loop5
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found loop6
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found loop7
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda1
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda2
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda3
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found usr
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda4
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda6
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda7
Mar 25 01:33:13.757899 extend-filesystems[1472]: Found sda9
Mar 25 01:33:13.757899 extend-filesystems[1472]: Checking size of /dev/sda9
Mar 25 01:33:13.921658 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Mar 25 01:33:13.921936 update_engine[1483]: I20250325 01:33:13.779369 1483 main.cc:92] Flatcar Update Engine starting
Mar 25 01:33:13.921936 update_engine[1483]: I20250325 01:33:13.815905 1483 update_check_scheduler.cc:74] Next update check in 10m13s
Mar 25 01:33:13.785872 dbus-daemon[1470]: [system] SELinux support is enabled
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.763 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetch successful
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetch successful
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetch successful
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Mar 25 01:33:13.922721 coreos-metadata[1469]: Mar 25 01:33:13.790 INFO Fetch successful
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: ----------------------------------------------------
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: corporation. Support and training for ntp-4 are
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: available at https://www.nwtime.org/support
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: ----------------------------------------------------
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: proto: precision = 0.100 usec (-23)
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: basedate set to 2025-03-12
Mar 25 01:33:13.923092 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:33:13.932439 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Mar 25 01:33:13.984622 extend-filesystems[1472]: Resized partition /dev/sda9
Mar 25 01:33:13.768676 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 25 01:33:13.805453 dbus-daemon[1470]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1399 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listen normally on 3 eth0 10.128.0.106:123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listen normally on 4 lo [::1]:123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: bind(21) AF_INET6 fe80::4001:aff:fe80:6a%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6a%2#123
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: failed to init interface for address fe80::4001:aff:fe80:6a%2
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:14.014984 ntpd[1476]: 25 Mar 01:33:13 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:14.015703 extend-filesystems[1503]: resize2fs 1.47.2 (1-Jan-2025)
Mar 25 01:33:14.015703 extend-filesystems[1503]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Mar 25 01:33:14.015703 extend-filesystems[1503]: old_desc_blocks = 1, new_desc_blocks = 2
Mar 25 01:33:14.015703 extend-filesystems[1503]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Mar 25 01:33:14.103418 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1374)
Mar 25 01:33:13.769016 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 25 01:33:13.851541 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.systemd1'
Mar 25 01:33:14.103872 extend-filesystems[1472]: Resized filesystem in /dev/sda9
Mar 25 01:33:13.781835 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 25 01:33:13.900217 ntpd[1476]: ntpd 4.2.8p17@1.4004-o Mon Mar 24 23:09:41 UTC 2025 (1): Starting
Mar 25 01:33:14.116917 jq[1487]: true
Mar 25 01:33:13.782398 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 25 01:33:13.900267 ntpd[1476]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Mar 25 01:33:13.803717 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 25 01:33:13.904866 ntpd[1476]: ----------------------------------------------------
Mar 25 01:33:13.822002 systemd[1]: motdgen.service: Deactivated successfully.
Mar 25 01:33:13.904888 ntpd[1476]: ntp-4 is maintained by Network Time Foundation,
Mar 25 01:33:13.823332 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 25 01:33:13.904903 ntpd[1476]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Mar 25 01:33:13.854210 systemd[1]: Started update-engine.service - Update Engine.
Mar 25 01:33:13.904919 ntpd[1476]: corporation. Support and training for ntp-4 are
Mar 25 01:33:14.139070 jq[1512]: true
Mar 25 01:33:13.893413 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 25 01:33:13.904942 ntpd[1476]: available at https://www.nwtime.org/support
Mar 25 01:33:14.151769 tar[1495]: linux-amd64/helm
Mar 25 01:33:13.893473 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 25 01:33:13.904958 ntpd[1476]: ----------------------------------------------------
Mar 25 01:33:13.949829 (ntainerd)[1510]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 25 01:33:13.911950 ntpd[1476]: proto: precision = 0.100 usec (-23)
Mar 25 01:33:13.984380 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Mar 25 01:33:13.914640 ntpd[1476]: basedate set to 2025-03-12
Mar 25 01:33:13.992970 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 25 01:33:13.915116 ntpd[1476]: gps base set to 2025-03-16 (week 2358)
Mar 25 01:33:13.993015 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 25 01:33:13.933074 ntpd[1476]: Listen and drop on 0 v6wildcard [::]:123
Mar 25 01:33:14.020055 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 25 01:33:13.933155 ntpd[1476]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Mar 25 01:33:14.030129 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 25 01:33:13.933433 ntpd[1476]: Listen normally on 2 lo 127.0.0.1:123
Mar 25 01:33:14.032547 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 25 01:33:13.933484 ntpd[1476]: Listen normally on 3 eth0 10.128.0.106:123
Mar 25 01:33:14.060823 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 25 01:33:13.933539 ntpd[1476]: Listen normally on 4 lo [::1]:123
Mar 25 01:33:13.933595 ntpd[1476]: bind(21) AF_INET6 fe80::4001:aff:fe80:6a%2#123 flags 0x11 failed: Cannot assign requested address
Mar 25 01:33:13.933624 ntpd[1476]: unable to create socket on eth0 (5) for fe80::4001:aff:fe80:6a%2#123
Mar 25 01:33:13.933645 ntpd[1476]: failed to init interface for address fe80::4001:aff:fe80:6a%2
Mar 25 01:33:13.933686 ntpd[1476]: Listening on routing socket on fd #21 for interface updates
Mar 25 01:33:13.954331 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:13.954373 ntpd[1476]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Mar 25 01:33:14.210122 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 25 01:33:14.322874 systemd-logind[1481]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 25 01:33:14.322923 systemd-logind[1481]: Watching system buttons on /dev/input/event2 (Sleep Button)
Mar 25 01:33:14.322953 systemd-logind[1481]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 25 01:33:14.326753 bash[1539]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:33:14.329455 systemd-logind[1481]: New seat seat0.
Mar 25 01:33:14.332175 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 25 01:33:14.343051 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 25 01:33:14.366776 systemd[1]: Starting sshkeys.service...
Mar 25 01:33:14.379843 systemd[1]: Started systemd-hostnamed.service - Hostname Service.
Mar 25 01:33:14.383026 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.hostname1'
Mar 25 01:33:14.383780 dbus-daemon[1470]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1515 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Mar 25 01:33:14.399736 systemd[1]: Starting polkit.service - Authorization Manager...
Mar 25 01:33:14.401351 systemd-networkd[1399]: eth0: Gained IPv6LL
Mar 25 01:33:14.409542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 25 01:33:14.427672 systemd[1]: Reached target network-online.target - Network is Online.
Mar 25 01:33:14.445456 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:33:14.461346 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 25 01:33:14.471712 systemd[1]: Starting oem-gce.service - GCE Linux Agent...
Mar 25 01:33:14.509723 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 25 01:33:14.525500 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 25 01:33:14.545012 locksmithd[1517]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 25 01:33:14.558749 init.sh[1555]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Mar 25 01:33:14.560664 init.sh[1555]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Mar 25 01:33:14.560664 init.sh[1555]: + /usr/bin/google_instance_setup
Mar 25 01:33:14.567157 polkitd[1546]: Started polkitd version 121
Mar 25 01:33:14.603983 polkitd[1546]: Loading rules from directory /etc/polkit-1/rules.d
Mar 25 01:33:14.604085 polkitd[1546]: Loading rules from directory /usr/share/polkit-1/rules.d
Mar 25 01:33:14.613399 polkitd[1546]: Finished loading, compiling and executing 2 rules
Mar 25 01:33:14.616615 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 25 01:33:14.616339 dbus-daemon[1470]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Mar 25 01:33:14.622034 polkitd[1546]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Mar 25 01:33:14.629100 systemd[1]: Started polkit.service - Authorization Manager.
Mar 25 01:33:14.712454 systemd-hostnamed[1515]: Hostname set to (transient)
Mar 25 01:33:14.718189 systemd-resolved[1326]: System hostname changed to 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal'.
Mar 25 01:33:14.760431 coreos-metadata[1561]: Mar 25 01:33:14.758 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Mar 25 01:33:14.761868 coreos-metadata[1561]: Mar 25 01:33:14.761 INFO Fetch failed with 404: resource not found
Mar 25 01:33:14.761868 coreos-metadata[1561]: Mar 25 01:33:14.761 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Mar 25 01:33:14.762832 coreos-metadata[1561]: Mar 25 01:33:14.762 INFO Fetch successful
Mar 25 01:33:14.762832 coreos-metadata[1561]: Mar 25 01:33:14.762 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Mar 25 01:33:14.763215 coreos-metadata[1561]: Mar 25 01:33:14.763 INFO Fetch failed with 404: resource not found
Mar 25 01:33:14.763215 coreos-metadata[1561]: Mar 25 01:33:14.763 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Mar 25 01:33:14.767172 coreos-metadata[1561]: Mar 25 01:33:14.767 INFO Fetch failed with 404: resource not found
Mar 25 01:33:14.767172 coreos-metadata[1561]: Mar 25 01:33:14.767 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Mar 25 01:33:14.767172 coreos-metadata[1561]: Mar 25 01:33:14.767 INFO Fetch successful
Mar 25 01:33:14.779126 unknown[1561]: wrote ssh authorized keys file for user: core
Mar 25 01:33:14.842328 update-ssh-keys[1577]: Updated "/home/core/.ssh/authorized_keys"
Mar 25 01:33:14.846627 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 25 01:33:14.868228 systemd[1]: Finished sshkeys.service.
Mar 25 01:33:14.989475 containerd[1510]: time="2025-03-25T01:33:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Mar 25 01:33:14.996011 containerd[1510]: time="2025-03-25T01:33:14.995398377Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1
Mar 25 01:33:15.064449 containerd[1510]: time="2025-03-25T01:33:15.064372492Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.967µs"
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.064602496Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.064640208Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.064859460Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.064890052Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.064929169Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065007203Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065025159Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065376590Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065413402Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065431379Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065444806Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066280 containerd[1510]: time="2025-03-25T01:33:15.065554459Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066897 containerd[1510]: time="2025-03-25T01:33:15.065848489Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066897 containerd[1510]: time="2025-03-25T01:33:15.065893108Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Mar 25 01:33:15.066897 containerd[1510]: time="2025-03-25T01:33:15.065911734Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Mar 25 01:33:15.066897 containerd[1510]: time="2025-03-25T01:33:15.065960803Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Mar 25 01:33:15.072428 containerd[1510]: time="2025-03-25T01:33:15.072059919Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Mar 25 01:33:15.072428 containerd[1510]: time="2025-03-25T01:33:15.072196684Z" level=info msg="metadata content store policy set" policy=shared
Mar 25 01:33:15.084851 containerd[1510]: time="2025-03-25T01:33:15.084795246Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Mar 25 01:33:15.087279 containerd[1510]: time="2025-03-25T01:33:15.085046539Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Mar 25 01:33:15.087279 containerd[1510]: time="2025-03-25T01:33:15.085420428Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Mar 25 01:33:15.087279 containerd[1510]: time="2025-03-25T01:33:15.085465031Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Mar 25 01:33:15.087279 containerd[1510]: time="2025-03-25T01:33:15.085488305Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Mar 25 01:33:15.087279 containerd[1510]: time="2025-03-25T01:33:15.085507880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.087598192Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.087659479Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.087682126Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Mar 25 01:33:15.088066 containerd[1510]:
time="2025-03-25T01:33:15.087702285Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.087741581Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.087766522Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Mar 25 01:33:15.088066 containerd[1510]: time="2025-03-25T01:33:15.088010185Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.088043645Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.088481668Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.088985151Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089038293Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089062245Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089084662Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089105629Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089142908Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089170721Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089190801Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089312930Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089339840Z" level=info msg="Start snapshots syncer" Mar 25 01:33:15.090667 containerd[1510]: time="2025-03-25T01:33:15.089388397Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Mar 25 01:33:15.096357 containerd[1510]: time="2025-03-25T01:33:15.095395960Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Mar 25 01:33:15.096357 containerd[1510]: time="2025-03-25T01:33:15.095584588Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.095758924Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096056204Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096093841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096133267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096152047Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096172325Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096204709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096279857Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Mar 25 01:33:15.096659 containerd[1510]: time="2025-03-25T01:33:15.096323571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.097064607Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.097103821Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098418332Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098475524Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098493923Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098532525Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098546719Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098563391Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Mar 25 01:33:15.100285 containerd[1510]: time="2025-03-25T01:33:15.098584068Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Mar 25 01:33:15.101257 containerd[1510]: time="2025-03-25T01:33:15.101208284Z" level=info msg="runtime interface created" Mar 25 01:33:15.101396 containerd[1510]: time="2025-03-25T01:33:15.101374086Z" level=info msg="created NRI interface" Mar 25 01:33:15.101514 containerd[1510]: time="2025-03-25T01:33:15.101494196Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Mar 25 01:33:15.101625 containerd[1510]: time="2025-03-25T01:33:15.101610134Z" level=info msg="Connect containerd service" Mar 25 01:33:15.101769 containerd[1510]: time="2025-03-25T01:33:15.101741853Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 25 01:33:15.109957 
containerd[1510]: time="2025-03-25T01:33:15.109549830Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609575203Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609688208Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609730513Z" level=info msg="Start subscribing containerd event" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609813195Z" level=info msg="Start recovering state" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609941277Z" level=info msg="Start event monitor" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609965196Z" level=info msg="Start cni network conf syncer for default" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.609985406Z" level=info msg="Start streaming server" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.610009234Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.610024261Z" level=info msg="runtime interface starting up..." Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.610042084Z" level=info msg="starting plugins..." Mar 25 01:33:15.610233 containerd[1510]: time="2025-03-25T01:33:15.610063914Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Mar 25 01:33:15.616896 containerd[1510]: time="2025-03-25T01:33:15.612430140Z" level=info msg="containerd successfully booted in 0.627619s" Mar 25 01:33:15.613438 systemd[1]: Started containerd.service - containerd container runtime. 
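The `failed to load cni during init` error above is expected at this stage: the CRI plugin looks for a network config in `/etc/cni/net.d`, and nothing has installed one yet (a CNI provider typically drops one in later). For reference, a conflist of the general shape containerd accepts can be sketched as below — the network name and subnet are placeholder assumptions, not values from this system:

```python
import json

# Hypothetical minimal CNI conflist of the shape expected under
# /etc/cni/net.d; "example-net" and the subnet are made-up placeholders.
def minimal_cni_conflist():
    return json.dumps({
        "cniVersion": "1.0.0",
        "name": "example-net",                 # placeholder network name
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "subnet": "10.88.0.0/16",  # placeholder subnet
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }, indent=2)
```

Writing such a file (e.g. as `10-example.conflist`) would satisfy the CNI conf syncer, but on a Kubernetes node this is normally left to the cluster's network add-on.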
Mar 25 01:33:15.805010 tar[1495]: linux-amd64/LICENSE Mar 25 01:33:15.805651 tar[1495]: linux-amd64/README.md Mar 25 01:33:15.852166 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Mar 25 01:33:15.934486 sshd_keygen[1488]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Mar 25 01:33:15.989231 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Mar 25 01:33:16.005623 systemd[1]: Starting issuegen.service - Generate /run/issue... Mar 25 01:33:16.007270 instance-setup[1563]: INFO Running google_set_multiqueue. Mar 25 01:33:16.035879 systemd[1]: issuegen.service: Deactivated successfully. Mar 25 01:33:16.036205 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 25 01:33:16.036813 instance-setup[1563]: INFO Set channels for eth0 to 2. Mar 25 01:33:16.042876 instance-setup[1563]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Mar 25 01:33:16.045403 instance-setup[1563]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Mar 25 01:33:16.045479 instance-setup[1563]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Mar 25 01:33:16.047188 instance-setup[1563]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Mar 25 01:33:16.048006 instance-setup[1563]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Mar 25 01:33:16.051575 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 25 01:33:16.050396 instance-setup[1563]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Mar 25 01:33:16.050461 instance-setup[1563]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. 
Mar 25 01:33:16.055466 instance-setup[1563]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Mar 25 01:33:16.067204 instance-setup[1563]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 25 01:33:16.072714 instance-setup[1563]: INFO /usr/sbin/google_set_multiqueue: line 133: echo: write error: Value too large for defined data type Mar 25 01:33:16.079418 instance-setup[1563]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Mar 25 01:33:16.080053 instance-setup[1563]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Mar 25 01:33:16.089867 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 25 01:33:16.111467 init.sh[1555]: + /usr/bin/google_metadata_script_runner --script-type startup Mar 25 01:33:16.110495 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 25 01:33:16.121632 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 25 01:33:16.122127 systemd[1]: Reached target getty.target - Login Prompts. Mar 25 01:33:16.282527 startup-script[1642]: INFO Starting startup scripts. Mar 25 01:33:16.288771 startup-script[1642]: INFO No startup scripts found in metadata. Mar 25 01:33:16.288867 startup-script[1642]: INFO Finished running startup scripts. Mar 25 01:33:16.312276 init.sh[1555]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Mar 25 01:33:16.312276 init.sh[1555]: + daemon_pids=() Mar 25 01:33:16.312276 init.sh[1555]: + for d in accounts clock_skew network Mar 25 01:33:16.312276 init.sh[1555]: + daemon_pids+=($!) Mar 25 01:33:16.312276 init.sh[1555]: + for d in accounts clock_skew network Mar 25 01:33:16.312595 init.sh[1647]: + /usr/bin/google_clock_skew_daemon Mar 25 01:33:16.312974 init.sh[1555]: + daemon_pids+=($!) Mar 25 01:33:16.312974 init.sh[1555]: + for d in accounts clock_skew network Mar 25 01:33:16.313929 init.sh[1555]: + daemon_pids+=($!) 
Mar 25 01:33:16.313929 init.sh[1555]: + NOTIFY_SOCKET=/run/systemd/notify Mar 25 01:33:16.313929 init.sh[1555]: + /usr/bin/systemd-notify --ready Mar 25 01:33:16.314122 init.sh[1646]: + /usr/bin/google_accounts_daemon Mar 25 01:33:16.314457 init.sh[1648]: + /usr/bin/google_network_daemon Mar 25 01:33:16.332073 systemd[1]: Started oem-gce.service - GCE Linux Agent. Mar 25 01:33:16.345456 init.sh[1555]: + wait -n 1646 1647 1648 Mar 25 01:33:16.627784 google-clock-skew[1647]: INFO Starting Google Clock Skew daemon. Mar 25 01:33:16.654601 google-clock-skew[1647]: INFO Clock drift token has changed: 0. Mar 25 01:33:16.698587 google-networking[1648]: INFO Starting Google Networking daemon. Mar 25 01:33:16.729431 groupadd[1657]: group added to /etc/group: name=google-sudoers, GID=1000 Mar 25 01:33:16.732448 groupadd[1657]: group added to /etc/gshadow: name=google-sudoers Mar 25 01:33:16.782797 groupadd[1657]: new group: name=google-sudoers, GID=1000 Mar 25 01:33:16.812785 google-accounts[1646]: INFO Starting Google Accounts daemon. Mar 25 01:33:16.827965 google-accounts[1646]: WARNING OS Login not installed. Mar 25 01:33:16.829442 google-accounts[1646]: INFO Creating a new user account for 0. Mar 25 01:33:16.834746 init.sh[1670]: useradd: invalid user name '0': use --badname to ignore Mar 25 01:33:16.835309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:33:16.835660 google-accounts[1646]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Mar 25 01:33:16.855658 systemd[1]: Reached target multi-user.target - Multi-User System. Mar 25 01:33:16.856390 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:33:16.865634 systemd[1]: Startup finished in 1.033s (kernel) + 9.616s (initrd) + 9.553s (userspace) = 20.203s. 
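The `Startup finished` line above breaks boot time into kernel, initrd, and userspace phases. The printed components (1.033 + 9.616 + 9.553 = 20.202) differ from the printed total (20.203) by a millisecond because each figure is rounded independently. A small parser, assuming the line format shown above:

```python
import re

def parse_startup(line):
    """Extract phase durations (seconds) and the total from a systemd
    'Startup finished' summary line."""
    phases = {name: float(val)
              for val, name in re.findall(r"([\d.]+)s \((\w+)\)", line)}
    total = float(re.search(r"= ([\d.]+)s", line).group(1))
    return phases, total

# The summary line as logged above.
line = ("Startup finished in 1.033s (kernel) + 9.616s (initrd) "
        "+ 9.553s (userspace) = 20.203s.")
phases, total = parse_startup(line)
```

The phase sum agrees with the total only to within rounding, so any tooling built on this line should compare with a small tolerance rather than exactly.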
Mar 25 01:33:16.905445 ntpd[1476]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6a%2]:123 Mar 25 01:33:16.905902 ntpd[1476]: 25 Mar 01:33:16 ntpd[1476]: Listen normally on 6 eth0 [fe80::4001:aff:fe80:6a%2]:123 Mar 25 01:33:17.000778 systemd-resolved[1326]: Clock change detected. Flushing caches. Mar 25 01:33:17.001203 google-clock-skew[1647]: INFO Synced system time with hardware clock. Mar 25 01:33:17.526219 kubelet[1671]: E0325 01:33:17.526149 1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:33:17.528178 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:33:17.528433 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:33:17.528923 systemd[1]: kubelet.service: Consumed 1.240s CPU time, 246M memory peak. Mar 25 01:33:18.130942 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Mar 25 01:33:18.132809 systemd[1]: Started sshd@0-10.128.0.106:22-139.178.89.65:59188.service - OpenSSH per-connection server daemon (139.178.89.65:59188). Mar 25 01:33:18.444017 sshd[1686]: Accepted publickey for core from 139.178.89.65 port 59188 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:18.447621 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:18.456753 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 25 01:33:18.458335 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 25 01:33:18.472245 systemd-logind[1481]: New session 1 of user core. Mar 25 01:33:18.490657 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
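The kubelet failure above is the normal pre-bootstrap state: `/var/lib/kubelet/config.yaml` is written by `kubeadm init` or `kubeadm join`, so until one of those runs the unit exits with status 1 and systemd keeps scheduling restarts (the restart counter appears later in this log). When triaging, the missing path can be pulled straight out of the error text; a sketch assuming the message format logged above:

```python
import re

# Abbreviated copy of the kubelet error logged above ("..." elides the
# repeated wrapping of the same message).
KUBELET_ERR = ('"command failed" err="failed to load kubelet config file, '
               'path: /var/lib/kubelet/config.yaml, error: ..."')

def missing_config_path(msg):
    """Extract the config path the kubelet failed to load, if present."""
    m = re.search(r"path: (\S+?),", msg)
    return m.group(1) if m else None
```

Once `kubeadm` has written the file, the scheduled restart picks it up without manual intervention.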
Mar 25 01:33:18.495278 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 25 01:33:18.519414 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 25 01:33:18.522851 systemd-logind[1481]: New session c1 of user core. Mar 25 01:33:18.710284 systemd[1690]: Queued start job for default target default.target. Mar 25 01:33:18.717856 systemd[1690]: Created slice app.slice - User Application Slice. Mar 25 01:33:18.717904 systemd[1690]: Reached target paths.target - Paths. Mar 25 01:33:18.718105 systemd[1690]: Reached target timers.target - Timers. Mar 25 01:33:18.719895 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 25 01:33:18.734187 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 25 01:33:18.734413 systemd[1690]: Reached target sockets.target - Sockets. Mar 25 01:33:18.734502 systemd[1690]: Reached target basic.target - Basic System. Mar 25 01:33:18.734579 systemd[1690]: Reached target default.target - Main User Target. Mar 25 01:33:18.734635 systemd[1690]: Startup finished in 202ms. Mar 25 01:33:18.734774 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 25 01:33:18.750553 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 25 01:33:18.982478 systemd[1]: Started sshd@1-10.128.0.106:22-139.178.89.65:59190.service - OpenSSH per-connection server daemon (139.178.89.65:59190). Mar 25 01:33:19.299775 sshd[1701]: Accepted publickey for core from 139.178.89.65 port 59190 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:19.301844 sshd-session[1701]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:19.309817 systemd-logind[1481]: New session 2 of user core. Mar 25 01:33:19.319671 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 25 01:33:19.518085 sshd[1703]: Connection closed by 139.178.89.65 port 59190 Mar 25 01:33:19.519362 sshd-session[1701]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:19.525093 systemd[1]: sshd@1-10.128.0.106:22-139.178.89.65:59190.service: Deactivated successfully. Mar 25 01:33:19.528091 systemd[1]: session-2.scope: Deactivated successfully. Mar 25 01:33:19.530661 systemd-logind[1481]: Session 2 logged out. Waiting for processes to exit. Mar 25 01:33:19.532482 systemd-logind[1481]: Removed session 2. Mar 25 01:33:19.573957 systemd[1]: Started sshd@2-10.128.0.106:22-139.178.89.65:59192.service - OpenSSH per-connection server daemon (139.178.89.65:59192). Mar 25 01:33:19.878491 sshd[1709]: Accepted publickey for core from 139.178.89.65 port 59192 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:19.880597 sshd-session[1709]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:19.887424 systemd-logind[1481]: New session 3 of user core. Mar 25 01:33:19.898673 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 25 01:33:20.088699 sshd[1711]: Connection closed by 139.178.89.65 port 59192 Mar 25 01:33:20.089828 sshd-session[1709]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:20.095219 systemd[1]: sshd@2-10.128.0.106:22-139.178.89.65:59192.service: Deactivated successfully. Mar 25 01:33:20.098040 systemd[1]: session-3.scope: Deactivated successfully. Mar 25 01:33:20.100186 systemd-logind[1481]: Session 3 logged out. Waiting for processes to exit. Mar 25 01:33:20.101757 systemd-logind[1481]: Removed session 3. Mar 25 01:33:20.145565 systemd[1]: Started sshd@3-10.128.0.106:22-139.178.89.65:59198.service - OpenSSH per-connection server daemon (139.178.89.65:59198). 
Mar 25 01:33:20.447051 sshd[1717]: Accepted publickey for core from 139.178.89.65 port 59198 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:20.449235 sshd-session[1717]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:20.456114 systemd-logind[1481]: New session 4 of user core. Mar 25 01:33:20.466639 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 25 01:33:20.661156 sshd[1719]: Connection closed by 139.178.89.65 port 59198 Mar 25 01:33:20.662515 sshd-session[1717]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:20.667962 systemd[1]: sshd@3-10.128.0.106:22-139.178.89.65:59198.service: Deactivated successfully. Mar 25 01:33:20.670963 systemd[1]: session-4.scope: Deactivated successfully. Mar 25 01:33:20.673085 systemd-logind[1481]: Session 4 logged out. Waiting for processes to exit. Mar 25 01:33:20.674907 systemd-logind[1481]: Removed session 4. Mar 25 01:33:20.715620 systemd[1]: Started sshd@4-10.128.0.106:22-139.178.89.65:59200.service - OpenSSH per-connection server daemon (139.178.89.65:59200). Mar 25 01:33:21.016260 sshd[1725]: Accepted publickey for core from 139.178.89.65 port 59200 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:21.018551 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:21.026541 systemd-logind[1481]: New session 5 of user core. Mar 25 01:33:21.036689 systemd[1]: Started session-5.scope - Session 5 of User core. 
Mar 25 01:33:21.213685 sudo[1728]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 25 01:33:21.214242 sudo[1728]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:33:21.230087 sudo[1728]: pam_unix(sudo:session): session closed for user root Mar 25 01:33:21.273067 sshd[1727]: Connection closed by 139.178.89.65 port 59200 Mar 25 01:33:21.274702 sshd-session[1725]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:21.281048 systemd[1]: sshd@4-10.128.0.106:22-139.178.89.65:59200.service: Deactivated successfully. Mar 25 01:33:21.284060 systemd[1]: session-5.scope: Deactivated successfully. Mar 25 01:33:21.286569 systemd-logind[1481]: Session 5 logged out. Waiting for processes to exit. Mar 25 01:33:21.288335 systemd-logind[1481]: Removed session 5. Mar 25 01:33:21.332786 systemd[1]: Started sshd@5-10.128.0.106:22-139.178.89.65:59208.service - OpenSSH per-connection server daemon (139.178.89.65:59208). Mar 25 01:33:21.642746 sshd[1734]: Accepted publickey for core from 139.178.89.65 port 59208 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:21.645387 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:21.653462 systemd-logind[1481]: New session 6 of user core. Mar 25 01:33:21.663698 systemd[1]: Started session-6.scope - Session 6 of User core. 
Mar 25 01:33:21.825906 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 25 01:33:21.826481 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:33:21.832771 sudo[1738]: pam_unix(sudo:session): session closed for user root Mar 25 01:33:21.848743 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 25 01:33:21.849271 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:33:21.863727 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 25 01:33:21.916668 augenrules[1760]: No rules Mar 25 01:33:21.919165 systemd[1]: audit-rules.service: Deactivated successfully. Mar 25 01:33:21.919593 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 25 01:33:21.921863 sudo[1737]: pam_unix(sudo:session): session closed for user root Mar 25 01:33:21.965213 sshd[1736]: Connection closed by 139.178.89.65 port 59208 Mar 25 01:33:21.966351 sshd-session[1734]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:21.971654 systemd[1]: sshd@5-10.128.0.106:22-139.178.89.65:59208.service: Deactivated successfully. Mar 25 01:33:21.974537 systemd[1]: session-6.scope: Deactivated successfully. Mar 25 01:33:21.976993 systemd-logind[1481]: Session 6 logged out. Waiting for processes to exit. Mar 25 01:33:21.978735 systemd-logind[1481]: Removed session 6. Mar 25 01:33:22.019555 systemd[1]: Started sshd@6-10.128.0.106:22-139.178.89.65:59218.service - OpenSSH per-connection server daemon (139.178.89.65:59218). 
Mar 25 01:33:22.321849 sshd[1769]: Accepted publickey for core from 139.178.89.65 port 59218 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:33:22.323731 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:33:22.331195 systemd-logind[1481]: New session 7 of user core. Mar 25 01:33:22.339536 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 25 01:33:22.500157 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 25 01:33:22.500725 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 25 01:33:23.011808 systemd[1]: Starting docker.service - Docker Application Container Engine... Mar 25 01:33:23.027930 (dockerd)[1790]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Mar 25 01:33:23.426431 dockerd[1790]: time="2025-03-25T01:33:23.424240669Z" level=info msg="Starting up" Mar 25 01:33:23.435281 dockerd[1790]: time="2025-03-25T01:33:23.434692820Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Mar 25 01:33:23.568594 dockerd[1790]: time="2025-03-25T01:33:23.568540656Z" level=info msg="Loading containers: start." Mar 25 01:33:23.796368 kernel: Initializing XFRM netlink socket Mar 25 01:33:23.893420 systemd-networkd[1399]: docker0: Link UP Mar 25 01:33:23.960171 dockerd[1790]: time="2025-03-25T01:33:23.960107360Z" level=info msg="Loading containers: done." 
Mar 25 01:33:23.980254 dockerd[1790]: time="2025-03-25T01:33:23.980136555Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Mar 25 01:33:23.980506 dockerd[1790]: time="2025-03-25T01:33:23.980258660Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 Mar 25 01:33:23.980506 dockerd[1790]: time="2025-03-25T01:33:23.980433037Z" level=info msg="Daemon has completed initialization" Mar 25 01:33:24.024305 dockerd[1790]: time="2025-03-25T01:33:24.024205838Z" level=info msg="API listen on /run/docker.sock" Mar 25 01:33:24.024892 systemd[1]: Started docker.service - Docker Application Container Engine. Mar 25 01:33:25.013437 containerd[1510]: time="2025-03-25T01:33:25.013381820Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\"" Mar 25 01:33:25.496646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3811637254.mount: Deactivated successfully. 
Mar 25 01:33:27.391828 containerd[1510]: time="2025-03-25T01:33:27.391714883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:27.393217 containerd[1510]: time="2025-03-25T01:33:27.393115862Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.11: active requests=0, bytes read=32681201" Mar 25 01:33:27.394947 containerd[1510]: time="2025-03-25T01:33:27.394868084Z" level=info msg="ImageCreate event name:\"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:27.400314 containerd[1510]: time="2025-03-25T01:33:27.398862328Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:27.400497 containerd[1510]: time="2025-03-25T01:33:27.400278382Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.11\" with image id \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:77c54346965036acc7ac95c3200597ede36db9246179248dde21c1a3ecc1caf0\", size \"32671373\" in 2.386845589s" Mar 25 01:33:27.400625 containerd[1510]: time="2025-03-25T01:33:27.400598795Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.11\" returns image reference \"sha256:4db5a05c271eac8f5da2f95895ea1ccb9a38f48db3135ba3bdfe35941a396ea8\"" Mar 25 01:33:27.426763 containerd[1510]: time="2025-03-25T01:33:27.426714505Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\"" Mar 25 01:33:27.779127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Mar 25 01:33:27.782119 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:28.052516 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:33:28.063903 (kubelet)[2066]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:33:28.142677 kubelet[2066]: E0325 01:33:28.142506 2066 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:33:28.147225 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:33:28.147525 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:33:28.148039 systemd[1]: kubelet.service: Consumed 222ms CPU time, 95.8M memory peak. 
Mar 25 01:33:29.183128 containerd[1510]: time="2025-03-25T01:33:29.183058977Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:29.184777 containerd[1510]: time="2025-03-25T01:33:29.184735437Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.11: active requests=0, bytes read=29621706" Mar 25 01:33:29.186450 containerd[1510]: time="2025-03-25T01:33:29.186408889Z" level=info msg="ImageCreate event name:\"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:29.189922 containerd[1510]: time="2025-03-25T01:33:29.189822962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:29.191415 containerd[1510]: time="2025-03-25T01:33:29.191218351Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.11\" with image id \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d8874f3fb45591ecdac67a3035c730808f18b3ab13147495c7d77eb1960d4f6f\", size \"31107380\" in 1.764451395s" Mar 25 01:33:29.191415 containerd[1510]: time="2025-03-25T01:33:29.191268381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.11\" returns image reference \"sha256:de1025c2d496829d3250130380737609ffcdd10a4dce6f2dcd03f23a85a15e6a\"" Mar 25 01:33:29.217095 containerd[1510]: time="2025-03-25T01:33:29.216789537Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\"" Mar 25 01:33:30.524243 containerd[1510]: time="2025-03-25T01:33:30.524173486Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:30.525588 containerd[1510]: time="2025-03-25T01:33:30.525533754Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.11: active requests=0, bytes read=17905225" Mar 25 01:33:30.527173 containerd[1510]: time="2025-03-25T01:33:30.527100938Z" level=info msg="ImageCreate event name:\"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:30.535324 containerd[1510]: time="2025-03-25T01:33:30.534468661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:30.537851 containerd[1510]: time="2025-03-25T01:33:30.537797083Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.11\" with image id \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c699f8c97ae7ec819c8bd878d3db104ba72fc440d810d9030e09286b696017b5\", size \"19390935\" in 1.320955829s" Mar 25 01:33:30.537851 containerd[1510]: time="2025-03-25T01:33:30.537842204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.11\" returns image reference \"sha256:11492f0faf138e933cadd6f533f03e401da9a35e53711e833f18afa6b185b2b7\"" Mar 25 01:33:30.563722 containerd[1510]: time="2025-03-25T01:33:30.563667819Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 25 01:33:31.668909 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1776973680.mount: Deactivated successfully. 
Mar 25 01:33:32.251723 containerd[1510]: time="2025-03-25T01:33:32.251652103Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:32.253107 containerd[1510]: time="2025-03-25T01:33:32.253023891Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29187267" Mar 25 01:33:32.254543 containerd[1510]: time="2025-03-25T01:33:32.254474215Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:32.259321 containerd[1510]: time="2025-03-25T01:33:32.257425326Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:32.260573 containerd[1510]: time="2025-03-25T01:33:32.260531515Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 1.696811675s" Mar 25 01:33:32.260787 containerd[1510]: time="2025-03-25T01:33:32.260760043Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 25 01:33:32.287809 containerd[1510]: time="2025-03-25T01:33:32.287748029Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Mar 25 01:33:32.796146 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2361516185.mount: Deactivated successfully. 
Mar 25 01:33:33.847879 containerd[1510]: time="2025-03-25T01:33:33.847811501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:33.849394 containerd[1510]: time="2025-03-25T01:33:33.849301399Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18192419" Mar 25 01:33:33.851627 containerd[1510]: time="2025-03-25T01:33:33.851017778Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:33.858093 containerd[1510]: time="2025-03-25T01:33:33.858045056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:33.861498 containerd[1510]: time="2025-03-25T01:33:33.861438395Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.573637767s" Mar 25 01:33:33.861498 containerd[1510]: time="2025-03-25T01:33:33.861492560Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Mar 25 01:33:33.886691 containerd[1510]: time="2025-03-25T01:33:33.886644152Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Mar 25 01:33:34.292055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3114048048.mount: Deactivated successfully. 
Mar 25 01:33:34.297898 containerd[1510]: time="2025-03-25T01:33:34.297836003Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:34.299040 containerd[1510]: time="2025-03-25T01:33:34.298963204Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=324188" Mar 25 01:33:34.300277 containerd[1510]: time="2025-03-25T01:33:34.300199646Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:34.303059 containerd[1510]: time="2025-03-25T01:33:34.302991120Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:34.304333 containerd[1510]: time="2025-03-25T01:33:34.303906530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 417.21137ms" Mar 25 01:33:34.304333 containerd[1510]: time="2025-03-25T01:33:34.303952134Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Mar 25 01:33:34.329527 containerd[1510]: time="2025-03-25T01:33:34.329380484Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Mar 25 01:33:34.796783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3958443050.mount: Deactivated successfully. 
Mar 25 01:33:37.047309 containerd[1510]: time="2025-03-25T01:33:37.047214790Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:37.048963 containerd[1510]: time="2025-03-25T01:33:37.048880849Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57246061" Mar 25 01:33:37.050088 containerd[1510]: time="2025-03-25T01:33:37.050016698Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:37.053412 containerd[1510]: time="2025-03-25T01:33:37.053343926Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:33:37.054764 containerd[1510]: time="2025-03-25T01:33:37.054717121Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.725180008s" Mar 25 01:33:37.054887 containerd[1510]: time="2025-03-25T01:33:37.054772767Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Mar 25 01:33:38.290007 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Mar 25 01:33:38.295582 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:38.567175 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:33:38.578892 (kubelet)[2309]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 25 01:33:38.663341 kubelet[2309]: E0325 01:33:38.663253 2309 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 25 01:33:38.667111 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 25 01:33:38.667383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 25 01:33:38.668805 systemd[1]: kubelet.service: Consumed 230ms CPU time, 96.1M memory peak. Mar 25 01:33:40.117031 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:33:40.117354 systemd[1]: kubelet.service: Consumed 230ms CPU time, 96.1M memory peak. Mar 25 01:33:40.120703 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:40.152404 systemd[1]: Reload requested from client PID 2323 ('systemctl') (unit session-7.scope)... Mar 25 01:33:40.152833 systemd[1]: Reloading... Mar 25 01:33:40.325424 zram_generator::config[2364]: No configuration found. Mar 25 01:33:40.504415 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 25 01:33:40.656696 systemd[1]: Reloading finished in 502 ms. Mar 25 01:33:40.740868 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:40.745179 systemd[1]: kubelet.service: Deactivated successfully. Mar 25 01:33:40.745493 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 25 01:33:40.745573 systemd[1]: kubelet.service: Consumed 150ms CPU time, 83.6M memory peak. Mar 25 01:33:40.747935 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 25 01:33:41.542805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 25 01:33:41.554919 (kubelet)[2421]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 25 01:33:41.620673 kubelet[2421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 25 01:33:41.621616 kubelet[2421]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 25 01:33:41.621616 kubelet[2421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 25 01:33:41.621616 kubelet[2421]: I0325 01:33:41.621386 2421 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 25 01:33:42.296350 kubelet[2421]: I0325 01:33:42.295888 2421 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 25 01:33:42.296350 kubelet[2421]: I0325 01:33:42.295924 2421 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 25 01:33:42.296350 kubelet[2421]: I0325 01:33:42.296169 2421 server.go:927] "Client rotation is on, will bootstrap in background" Mar 25 01:33:42.321122 kubelet[2421]: I0325 01:33:42.320387 2421 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 25 01:33:42.321547 kubelet[2421]: E0325 01:33:42.321456 2421 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.106:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.344420 kubelet[2421]: I0325 01:33:42.344381 2421 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Mar 25 01:33:42.346937 kubelet[2421]: I0325 01:33:42.346843 2421 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 25 01:33:42.347263 kubelet[2421]: I0325 01:33:42.346914 2421 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 25 01:33:42.348327 kubelet[2421]: I0325 01:33:42.348249 2421 topology_manager.go:138] "Creating 
topology manager with none policy" Mar 25 01:33:42.348327 kubelet[2421]: I0325 01:33:42.348321 2421 container_manager_linux.go:301] "Creating device plugin manager" Mar 25 01:33:42.348574 kubelet[2421]: I0325 01:33:42.348539 2421 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:33:42.349910 kubelet[2421]: I0325 01:33:42.349870 2421 kubelet.go:400] "Attempting to sync node with API server" Mar 25 01:33:42.349910 kubelet[2421]: I0325 01:33:42.349906 2421 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 25 01:33:42.350098 kubelet[2421]: I0325 01:33:42.349945 2421 kubelet.go:312] "Adding apiserver pod source" Mar 25 01:33:42.350098 kubelet[2421]: I0325 01:33:42.349976 2421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 25 01:33:42.357812 kubelet[2421]: W0325 01:33:42.357452 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.357812 kubelet[2421]: E0325 01:33:42.357534 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.357812 kubelet[2421]: W0325 01:33:42.357626 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.357812 kubelet[2421]: E0325 01:33:42.357676 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.358650 kubelet[2421]: I0325 01:33:42.358593 2421 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" Mar 25 01:33:42.361440 kubelet[2421]: I0325 01:33:42.361404 2421 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 25 01:33:42.361561 kubelet[2421]: W0325 01:33:42.361502 2421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Mar 25 01:33:42.364123 kubelet[2421]: I0325 01:33:42.364000 2421 server.go:1264] "Started kubelet" Mar 25 01:33:42.364640 kubelet[2421]: I0325 01:33:42.364590 2421 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 25 01:33:42.366390 kubelet[2421]: I0325 01:33:42.365846 2421 server.go:455] "Adding debug handlers to kubelet server" Mar 25 01:33:42.369990 kubelet[2421]: I0325 01:33:42.369934 2421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 25 01:33:42.371879 kubelet[2421]: I0325 01:33:42.371798 2421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 25 01:33:42.373379 kubelet[2421]: I0325 01:33:42.372108 2421 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 25 01:33:42.373379 kubelet[2421]: E0325 01:33:42.372374 2421 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.106:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.106:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal.182fe7c6ebf5cb16 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,UID:ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,},FirstTimestamp:2025-03-25 01:33:42.36396623 +0000 UTC m=+0.803598104,LastTimestamp:2025-03-25 01:33:42.36396623 +0000 UTC m=+0.803598104,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,}" Mar 25 01:33:42.379395 kubelet[2421]: E0325 01:33:42.379343 2421 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" not found" Mar 25 01:33:42.379517 kubelet[2421]: I0325 01:33:42.379439 2421 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 25 01:33:42.379594 kubelet[2421]: I0325 01:33:42.379575 2421 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 25 01:33:42.379780 kubelet[2421]: I0325 01:33:42.379660 2421 reconciler.go:26] "Reconciler: start to sync state" Mar 25 01:33:42.380198 kubelet[2421]: W0325 01:33:42.380128 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.380340 kubelet[2421]: E0325 01:33:42.380210 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.382830 kubelet[2421]: 
E0325 01:33:42.382571 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.106:6443: connect: connection refused" interval="200ms" Mar 25 01:33:42.383303 kubelet[2421]: I0325 01:33:42.383217 2421 factory.go:221] Registration of the systemd container factory successfully Mar 25 01:33:42.383601 kubelet[2421]: I0325 01:33:42.383531 2421 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 25 01:33:42.385970 kubelet[2421]: I0325 01:33:42.385863 2421 factory.go:221] Registration of the containerd container factory successfully Mar 25 01:33:42.415584 kubelet[2421]: I0325 01:33:42.415428 2421 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 25 01:33:42.419991 kubelet[2421]: I0325 01:33:42.419472 2421 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 25 01:33:42.420367 kubelet[2421]: I0325 01:33:42.420265 2421 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 25 01:33:42.420367 kubelet[2421]: I0325 01:33:42.420361 2421 kubelet.go:2337] "Starting kubelet main sync loop" Mar 25 01:33:42.420541 kubelet[2421]: E0325 01:33:42.420429 2421 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 25 01:33:42.422331 kubelet[2421]: W0325 01:33:42.422110 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.422331 kubelet[2421]: E0325 01:33:42.422186 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused Mar 25 01:33:42.425425 kubelet[2421]: I0325 01:33:42.425242 2421 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 25 01:33:42.425425 kubelet[2421]: I0325 01:33:42.425263 2421 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 25 01:33:42.425425 kubelet[2421]: I0325 01:33:42.425326 2421 state_mem.go:36] "Initialized new in-memory state store" Mar 25 01:33:42.428736 kubelet[2421]: I0325 01:33:42.428408 2421 policy_none.go:49] "None policy: Start" Mar 25 01:33:42.429514 kubelet[2421]: I0325 01:33:42.429438 2421 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 25 01:33:42.429514 kubelet[2421]: I0325 01:33:42.429470 2421 state_mem.go:35] "Initializing new in-memory state store" Mar 25 01:33:42.438706 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Mar 25 01:33:42.456894 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Mar 25 01:33:42.461680 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Mar 25 01:33:42.476123 kubelet[2421]: I0325 01:33:42.475499 2421 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Mar 25 01:33:42.476123 kubelet[2421]: I0325 01:33:42.475768 2421 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Mar 25 01:33:42.476123 kubelet[2421]: I0325 01:33:42.475945 2421 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Mar 25 01:33:42.478319 kubelet[2421]: E0325 01:33:42.478271 2421 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" not found"
Mar 25 01:33:42.485278 kubelet[2421]: I0325 01:33:42.485241 2421 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.485756 kubelet[2421]: E0325 01:33:42.485719 2421 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.106:6443/api/v1/nodes\": dial tcp 10.128.0.106:6443: connect: connection refused" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.521144 kubelet[2421]: I0325 01:33:42.521052 2421 topology_manager.go:215] "Topology Admit Handler" podUID="07bf6aeb4199e6a7a807c6290527e728" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.536651 kubelet[2421]: I0325 01:33:42.536581 2421 topology_manager.go:215] "Topology Admit Handler" podUID="4600af026f3357769f0fe11919866d56" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.561519 kubelet[2421]: I0325 01:33:42.561061 2421 topology_manager.go:215] "Topology Admit Handler" podUID="e7a30bf92232e847c26a935d99b1b88c" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.570551 systemd[1]: Created slice kubepods-burstable-pod07bf6aeb4199e6a7a807c6290527e728.slice - libcontainer container kubepods-burstable-pod07bf6aeb4199e6a7a807c6290527e728.slice.
Mar 25 01:33:42.583815 kubelet[2421]: E0325 01:33:42.583741 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.106:6443: connect: connection refused" interval="400ms"
Mar 25 01:33:42.588579 systemd[1]: Created slice kubepods-burstable-pod4600af026f3357769f0fe11919866d56.slice - libcontainer container kubepods-burstable-pod4600af026f3357769f0fe11919866d56.slice.
Mar 25 01:33:42.600940 systemd[1]: Created slice kubepods-burstable-pode7a30bf92232e847c26a935d99b1b88c.slice - libcontainer container kubepods-burstable-pode7a30bf92232e847c26a935d99b1b88c.slice.
Mar 25 01:33:42.680876 kubelet[2421]: I0325 01:33:42.680789 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7a30bf92232e847c26a935d99b1b88c-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"e7a30bf92232e847c26a935d99b1b88c\") " pod="kube-system/kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.680876 kubelet[2421]: I0325 01:33:42.680876 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681608 kubelet[2421]: I0325 01:33:42.680911 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681608 kubelet[2421]: I0325 01:33:42.680941 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681608 kubelet[2421]: I0325 01:33:42.680968 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681608 kubelet[2421]: I0325 01:33:42.680995 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681744 kubelet[2421]: I0325 01:33:42.681022 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681744 kubelet[2421]: I0325 01:33:42.681052 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.681744 kubelet[2421]: I0325 01:33:42.681089 2421 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.690891 kubelet[2421]: I0325 01:33:42.690835 2421 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.691354 kubelet[2421]: E0325 01:33:42.691271 2421 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.106:6443/api/v1/nodes\": dial tcp 10.128.0.106:6443: connect: connection refused" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:42.886516 containerd[1510]: time="2025-03-25T01:33:42.886357373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:07bf6aeb4199e6a7a807c6290527e728,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:42.898159 containerd[1510]: time="2025-03-25T01:33:42.898092884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:4600af026f3357769f0fe11919866d56,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:42.905119 containerd[1510]: time="2025-03-25T01:33:42.904896517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:e7a30bf92232e847c26a935d99b1b88c,Namespace:kube-system,Attempt:0,}"
Mar 25 01:33:42.985184 kubelet[2421]: E0325 01:33:42.985116 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.106:6443: connect: connection refused" interval="800ms"
Mar 25 01:33:43.095950 kubelet[2421]: I0325 01:33:43.095901 2421 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:43.096418 kubelet[2421]: E0325 01:33:43.096366 2421 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.106:6443/api/v1/nodes\": dial tcp 10.128.0.106:6443: connect: connection refused" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:43.348880 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4210375964.mount: Deactivated successfully.
Mar 25 01:33:43.358305 containerd[1510]: time="2025-03-25T01:33:43.358223951Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:33:43.363278 containerd[1510]: time="2025-03-25T01:33:43.363201315Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=322072"
Mar 25 01:33:43.364385 containerd[1510]: time="2025-03-25T01:33:43.364325345Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:33:43.365736 containerd[1510]: time="2025-03-25T01:33:43.365650684Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:33:43.368211 containerd[1510]: time="2025-03-25T01:33:43.368162615Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:33:43.369523 containerd[1510]: time="2025-03-25T01:33:43.369457958Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 25 01:33:43.370873 containerd[1510]: time="2025-03-25T01:33:43.370687120Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0"
Mar 25 01:33:43.373885 containerd[1510]: time="2025-03-25T01:33:43.372138989Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Mar 25 01:33:43.373885 containerd[1510]: time="2025-03-25T01:33:43.373041565Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 472.199729ms"
Mar 25 01:33:43.375836 containerd[1510]: time="2025-03-25T01:33:43.375780388Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 485.34078ms"
Mar 25 01:33:43.383319 containerd[1510]: time="2025-03-25T01:33:43.382971723Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 473.674143ms"
Mar 25 01:33:43.407182 containerd[1510]: time="2025-03-25T01:33:43.405462004Z" level=info msg="connecting to shim 8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c" address="unix:///run/containerd/s/439c72f7009f69a06c9a025fd3677e39821d7bb8cee5040f1abfbdaf27461efd" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:43.441948 kubelet[2421]: W0325 01:33:43.441797 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.441948 kubelet[2421]: E0325 01:33:43.441888 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.106:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.442813 containerd[1510]: time="2025-03-25T01:33:43.442754316Z" level=info msg="connecting to shim ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718" address="unix:///run/containerd/s/3916495e365423f564bc6a236c9187d106db20c8bd4f13a603cfcb28842e1064" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:43.446690 containerd[1510]: time="2025-03-25T01:33:43.446639986Z" level=info msg="connecting to shim e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da" address="unix:///run/containerd/s/5685ed4002ac26818cb99767eb8a533f5afb5b414210c46f3570e99cee27a05c" namespace=k8s.io protocol=ttrpc version=3
Mar 25 01:33:43.489611 systemd[1]: Started cri-containerd-8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c.scope - libcontainer container 8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c.
Mar 25 01:33:43.511851 systemd[1]: Started cri-containerd-ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718.scope - libcontainer container ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718.
Mar 25 01:33:43.519936 systemd[1]: Started cri-containerd-e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da.scope - libcontainer container e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da.
Mar 25 01:33:43.571391 kubelet[2421]: W0325 01:33:43.570144 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.571391 kubelet[2421]: E0325 01:33:43.570237 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.128.0.106:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.583862 kubelet[2421]: W0325 01:33:43.583538 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.583862 kubelet[2421]: E0325 01:33:43.583612 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.106:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.623333 containerd[1510]: time="2025-03-25T01:33:43.623158254Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:4600af026f3357769f0fe11919866d56,Namespace:kube-system,Attempt:0,} returns sandbox id \"8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c\""
Mar 25 01:33:43.629618 kubelet[2421]: E0325 01:33:43.629540 2421 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flat"
Mar 25 01:33:43.635971 containerd[1510]: time="2025-03-25T01:33:43.634434114Z" level=info msg="CreateContainer within sandbox \"8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Mar 25 01:33:43.635971 containerd[1510]: time="2025-03-25T01:33:43.634959645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:07bf6aeb4199e6a7a807c6290527e728,Namespace:kube-system,Attempt:0,} returns sandbox id \"ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718\""
Mar 25 01:33:43.637961 kubelet[2421]: E0325 01:33:43.637579 2421 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-21291"
Mar 25 01:33:43.640550 containerd[1510]: time="2025-03-25T01:33:43.640495612Z" level=info msg="CreateContainer within sandbox \"ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Mar 25 01:33:43.645354 kubelet[2421]: W0325 01:33:43.645183 2421 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.645977 kubelet[2421]: E0325 01:33:43.645590 2421 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.128.0.106:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.106:6443: connect: connection refused
Mar 25 01:33:43.659708 containerd[1510]: time="2025-03-25T01:33:43.659641368Z" level=info msg="Container 074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:43.664279 containerd[1510]: time="2025-03-25T01:33:43.664201839Z" level=info msg="Container 9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:43.668590 containerd[1510]: time="2025-03-25T01:33:43.668546868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal,Uid:e7a30bf92232e847c26a935d99b1b88c,Namespace:kube-system,Attempt:0,} returns sandbox id \"e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da\""
Mar 25 01:33:43.670582 kubelet[2421]: E0325 01:33:43.670543 2421 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-21291"
Mar 25 01:33:43.673193 containerd[1510]: time="2025-03-25T01:33:43.673143777Z" level=info msg="CreateContainer within sandbox \"e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Mar 25 01:33:43.676695 containerd[1510]: time="2025-03-25T01:33:43.676381302Z" level=info msg="CreateContainer within sandbox \"8591c5a6b922a0683495d6b456d74a9aae8e048de785c1959f18ebc87ebf218c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81\""
Mar 25 01:33:43.678784 containerd[1510]: time="2025-03-25T01:33:43.678737859Z" level=info msg="StartContainer for \"074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81\""
Mar 25 01:33:43.680882 containerd[1510]: time="2025-03-25T01:33:43.680711211Z" level=info msg="CreateContainer within sandbox \"ecc95f45a42c7dccf1d1d9d2424da77a64a742ed1ae19cc5415936852e901718\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c\""
Mar 25 01:33:43.681925 containerd[1510]: time="2025-03-25T01:33:43.681565104Z" level=info msg="connecting to shim 074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81" address="unix:///run/containerd/s/439c72f7009f69a06c9a025fd3677e39821d7bb8cee5040f1abfbdaf27461efd" protocol=ttrpc version=3
Mar 25 01:33:43.682213 containerd[1510]: time="2025-03-25T01:33:43.682181560Z" level=info msg="StartContainer for \"9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c\""
Mar 25 01:33:43.683914 containerd[1510]: time="2025-03-25T01:33:43.683869178Z" level=info msg="connecting to shim 9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c" address="unix:///run/containerd/s/3916495e365423f564bc6a236c9187d106db20c8bd4f13a603cfcb28842e1064" protocol=ttrpc version=3
Mar 25 01:33:43.695069 containerd[1510]: time="2025-03-25T01:33:43.695023341Z" level=info msg="Container d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:33:43.714181 containerd[1510]: time="2025-03-25T01:33:43.714126278Z" level=info msg="CreateContainer within sandbox \"e83da202ee498b237b591ca9b911b4df3b8f962ce56f2a01b7ad7b459b7b45da\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8\""
Mar 25 01:33:43.717528 containerd[1510]: time="2025-03-25T01:33:43.715142740Z" level=info msg="StartContainer for \"d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8\""
Mar 25 01:33:43.725099 containerd[1510]: time="2025-03-25T01:33:43.725054651Z" level=info msg="connecting to shim d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8" address="unix:///run/containerd/s/5685ed4002ac26818cb99767eb8a533f5afb5b414210c46f3570e99cee27a05c" protocol=ttrpc version=3
Mar 25 01:33:43.726161 systemd[1]: Started cri-containerd-074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81.scope - libcontainer container 074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81.
Mar 25 01:33:43.746673 systemd[1]: Started cri-containerd-9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c.scope - libcontainer container 9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c.
Mar 25 01:33:43.781522 systemd[1]: Started cri-containerd-d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8.scope - libcontainer container d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8.
Mar 25 01:33:43.786375 kubelet[2421]: E0325 01:33:43.786281 2421 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.106:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.106:6443: connect: connection refused" interval="1.6s"
Mar 25 01:33:43.864723 containerd[1510]: time="2025-03-25T01:33:43.864669342Z" level=info msg="StartContainer for \"074676fc6b0079107203b991fe395f8a0c4e7c317ff44dd30e8a13fcbf24cf81\" returns successfully"
Mar 25 01:33:43.885050 containerd[1510]: time="2025-03-25T01:33:43.884419607Z" level=info msg="StartContainer for \"9ecc5af847c61aa970ca77c2538f55439ef64e85cb3cb41e1a5f1ad7be5faa5c\" returns successfully"
Mar 25 01:33:43.902864 kubelet[2421]: I0325 01:33:43.902366 2421 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:43.902864 kubelet[2421]: E0325 01:33:43.902774 2421 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.106:6443/api/v1/nodes\": dial tcp 10.128.0.106:6443: connect: connection refused" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:43.988114 containerd[1510]: time="2025-03-25T01:33:43.988063873Z" level=info msg="StartContainer for \"d977fd34fffd63754e7b4bb5f157ac153e1998c1b4a66ce3f6699c5ae9612be8\" returns successfully"
Mar 25 01:33:44.464657 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Mar 25 01:33:45.513103 kubelet[2421]: I0325 01:33:45.512768 2421 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:47.014321 kubelet[2421]: E0325 01:33:47.012609 2421 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" not found" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:47.113824 kubelet[2421]: I0325 01:33:47.113775 2421 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:47.361966 kubelet[2421]: I0325 01:33:47.361565 2421 apiserver.go:52] "Watching apiserver"
Mar 25 01:33:47.380478 kubelet[2421]: I0325 01:33:47.380407 2421 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Mar 25 01:33:49.020069 systemd[1]: Reload requested from client PID 2692 ('systemctl') (unit session-7.scope)...
Mar 25 01:33:49.020096 systemd[1]: Reloading...
Mar 25 01:33:49.113587 kubelet[2421]: W0325 01:33:49.112696 2421 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots]
Mar 25 01:33:49.282347 zram_generator::config[2740]: No configuration found.
Mar 25 01:33:49.468928 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 25 01:33:49.652706 systemd[1]: Reloading finished in 631 ms.
Mar 25 01:33:49.693586 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:33:49.704235 systemd[1]: kubelet.service: Deactivated successfully.
Mar 25 01:33:49.704604 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:33:49.704686 systemd[1]: kubelet.service: Consumed 1.374s CPU time, 116.5M memory peak.
Mar 25 01:33:49.709736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 25 01:33:49.992980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Mar 25 01:33:50.006923 (kubelet)[2785]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Mar 25 01:33:50.088617 kubelet[2785]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:33:50.088617 kubelet[2785]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 25 01:33:50.088617 kubelet[2785]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 25 01:33:50.091450 kubelet[2785]: I0325 01:33:50.088681 2785 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 25 01:33:50.096817 kubelet[2785]: I0325 01:33:50.096512 2785 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Mar 25 01:33:50.096817 kubelet[2785]: I0325 01:33:50.096543 2785 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 25 01:33:50.098378 kubelet[2785]: I0325 01:33:50.097477 2785 server.go:927] "Client rotation is on, will bootstrap in background"
Mar 25 01:33:50.100095 kubelet[2785]: I0325 01:33:50.100062 2785 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 25 01:33:50.102470 kubelet[2785]: I0325 01:33:50.102438 2785 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Mar 25 01:33:50.120091 kubelet[2785]: I0325 01:33:50.120043 2785 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Mar 25 01:33:50.120803 kubelet[2785]: I0325 01:33:50.120612 2785 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Mar 25 01:33:50.121383 kubelet[2785]: I0325 01:33:50.120671 2785 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Mar 25 01:33:50.121383 kubelet[2785]: I0325 01:33:50.121129 2785 topology_manager.go:138] "Creating topology manager with none policy"
Mar 25 01:33:50.121383 kubelet[2785]: I0325 01:33:50.121148 2785 container_manager_linux.go:301] "Creating device plugin manager"
Mar 25 01:33:50.121383 kubelet[2785]: I0325 01:33:50.121208 2785 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:33:50.121818 kubelet[2785]: I0325 01:33:50.121764 2785 kubelet.go:400] "Attempting to sync node with API server"
Mar 25 01:33:50.122490 kubelet[2785]: I0325 01:33:50.122061 2785 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Mar 25 01:33:50.122490 kubelet[2785]: I0325 01:33:50.122107 2785 kubelet.go:312] "Adding apiserver pod source"
Mar 25 01:33:50.122490 kubelet[2785]: I0325 01:33:50.122138 2785 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Mar 25 01:33:50.128523 kubelet[2785]: I0325 01:33:50.128494 2785 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1"
Mar 25 01:33:50.129043 kubelet[2785]: I0325 01:33:50.129020 2785 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 25 01:33:50.131506 kubelet[2785]: I0325 01:33:50.129869 2785 server.go:1264] "Started kubelet"
Mar 25 01:33:50.135059 kubelet[2785]: I0325 01:33:50.134054 2785 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 25 01:33:50.135059 kubelet[2785]: I0325 01:33:50.134485 2785 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 25 01:33:50.135059 kubelet[2785]: I0325 01:33:50.134532 2785 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 25 01:33:50.137020 kubelet[2785]: I0325 01:33:50.135912 2785 server.go:455] "Adding debug handlers to kubelet server"
Mar 25 01:33:50.137020 kubelet[2785]: I0325 01:33:50.136651 2785 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 25 01:33:50.150180 kubelet[2785]: I0325 01:33:50.150143 2785 volume_manager.go:291] "Starting Kubelet Volume Manager"
Mar 25 01:33:50.150888 kubelet[2785]: I0325 01:33:50.150842 2785 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Mar 25 01:33:50.151081 kubelet[2785]: I0325 01:33:50.151060 2785 reconciler.go:26] "Reconciler: start to sync state"
Mar 25 01:33:50.169259 kubelet[2785]: I0325 01:33:50.167482 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Mar 25 01:33:50.174864 kubelet[2785]: I0325 01:33:50.171549 2785 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Mar 25 01:33:50.174864 kubelet[2785]: I0325 01:33:50.171593 2785 status_manager.go:217] "Starting to sync pod status with apiserver"
Mar 25 01:33:50.174864 kubelet[2785]: I0325 01:33:50.171618 2785 kubelet.go:2337] "Starting kubelet main sync loop"
Mar 25 01:33:50.174864 kubelet[2785]: E0325 01:33:50.171696 2785 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Mar 25 01:33:50.194043 kubelet[2785]: I0325 01:33:50.192617 2785 factory.go:221] Registration of the systemd container factory successfully
Mar 25 01:33:50.194509 kubelet[2785]: I0325 01:33:50.194476 2785 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Mar 25 01:33:50.206066 kubelet[2785]: E0325 01:33:50.206033 2785 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Mar 25 01:33:50.206766 kubelet[2785]: I0325 01:33:50.206741 2785 factory.go:221] Registration of the containerd container factory successfully
Mar 25 01:33:50.257894 kubelet[2785]: I0325 01:33:50.257416 2785 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:50.272509 kubelet[2785]: I0325 01:33:50.271165 2785 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:50.272509 kubelet[2785]: I0325 01:33:50.271257 2785 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal"
Mar 25 01:33:50.276321 kubelet[2785]: E0325 01:33:50.275187 2785 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Mar 25 01:33:50.288617 kubelet[2785]: I0325 01:33:50.288322 2785 cpu_manager.go:214] "Starting CPU manager" policy="none"
Mar 25 01:33:50.288617 kubelet[2785]: I0325 01:33:50.288344 2785 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Mar 25 01:33:50.288617 kubelet[2785]: I0325 01:33:50.288369 2785 state_mem.go:36] "Initialized new in-memory state store"
Mar 25 01:33:50.288985 kubelet[2785]: I0325 01:33:50.288958 2785 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Mar 25 01:33:50.289070 kubelet[2785]: I0325 01:33:50.288986 2785 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Mar 25 01:33:50.289070 kubelet[2785]: I0325 01:33:50.289014 2785 policy_none.go:49] "None policy: Start"
Mar 25 01:33:50.290416 kubelet[2785]: I0325 01:33:50.289899 2785 memory_manager.go:170] "Starting memorymanager" policy="None"
Mar 25 01:33:50.290416 kubelet[2785]: I0325 01:33:50.289924 2785 state_mem.go:35] "Initializing new in-memory state store"
Mar 25 01:33:50.290416 kubelet[2785]: I0325
01:33:50.290112 2785 state_mem.go:75] "Updated machine memory state" Mar 25 01:33:50.300552 kubelet[2785]: I0325 01:33:50.300510 2785 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 25 01:33:50.300782 kubelet[2785]: I0325 01:33:50.300730 2785 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 25 01:33:50.303670 kubelet[2785]: I0325 01:33:50.303648 2785 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 25 01:33:50.477466 kubelet[2785]: I0325 01:33:50.476272 2785 topology_manager.go:215] "Topology Admit Handler" podUID="4600af026f3357769f0fe11919866d56" podNamespace="kube-system" podName="kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.477466 kubelet[2785]: I0325 01:33:50.476432 2785 topology_manager.go:215] "Topology Admit Handler" podUID="e7a30bf92232e847c26a935d99b1b88c" podNamespace="kube-system" podName="kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.477466 kubelet[2785]: I0325 01:33:50.476517 2785 topology_manager.go:215] "Topology Admit Handler" podUID="07bf6aeb4199e6a7a807c6290527e728" podNamespace="kube-system" podName="kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.485513 kubelet[2785]: W0325 01:33:50.485134 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 25 01:33:50.488130 kubelet[2785]: W0325 01:33:50.488017 2785 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 25 01:33:50.488553 kubelet[2785]: W0325 01:33:50.488367 2785 warnings.go:70] metadata.name: this is used in the 
Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Mar 25 01:33:50.488553 kubelet[2785]: E0325 01:33:50.488403 2785 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553175 kubelet[2785]: I0325 01:33:50.552569 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e7a30bf92232e847c26a935d99b1b88c-kubeconfig\") pod \"kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"e7a30bf92232e847c26a935d99b1b88c\") " pod="kube-system/kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553175 kubelet[2785]: I0325 01:33:50.552654 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553175 kubelet[2785]: I0325 01:33:50.552691 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-ca-certs\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553175 kubelet[2785]: I0325 
01:33:50.552748 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-k8s-certs\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553519 kubelet[2785]: I0325 01:33:50.552792 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553519 kubelet[2785]: I0325 01:33:50.552844 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-ca-certs\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553519 kubelet[2785]: I0325 01:33:50.552910 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/07bf6aeb4199e6a7a807c6290527e728-k8s-certs\") pod \"kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"07bf6aeb4199e6a7a807c6290527e728\") " pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553519 kubelet[2785]: I0325 01:33:50.553017 
2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-flexvolume-dir\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:50.553732 kubelet[2785]: I0325 01:33:50.553056 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4600af026f3357769f0fe11919866d56-kubeconfig\") pod \"kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" (UID: \"4600af026f3357769f0fe11919866d56\") " pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:33:51.124323 kubelet[2785]: I0325 01:33:51.123455 2785 apiserver.go:52] "Watching apiserver" Mar 25 01:33:51.152211 kubelet[2785]: I0325 01:33:51.152046 2785 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 25 01:33:51.324223 kubelet[2785]: I0325 01:33:51.323925 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" podStartSLOduration=2.3239004100000002 podStartE2EDuration="2.32390041s" podCreationTimestamp="2025-03-25 01:33:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:33:51.29948401 +0000 UTC m=+1.285359137" watchObservedRunningTime="2025-03-25 01:33:51.32390041 +0000 UTC m=+1.309775543" Mar 25 01:33:51.361314 kubelet[2785]: I0325 01:33:51.360914 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-controller-manager-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" podStartSLOduration=1.360887156 podStartE2EDuration="1.360887156s" podCreationTimestamp="2025-03-25 01:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:33:51.324841848 +0000 UTC m=+1.310716971" watchObservedRunningTime="2025-03-25 01:33:51.360887156 +0000 UTC m=+1.346762287" Mar 25 01:33:51.384722 kubelet[2785]: I0325 01:33:51.384358 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" podStartSLOduration=1.384327239 podStartE2EDuration="1.384327239s" podCreationTimestamp="2025-03-25 01:33:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:33:51.361632416 +0000 UTC m=+1.347507546" watchObservedRunningTime="2025-03-25 01:33:51.384327239 +0000 UTC m=+1.370202368" Mar 25 01:33:55.932488 sudo[1772]: pam_unix(sudo:session): session closed for user root Mar 25 01:33:55.974716 sshd[1771]: Connection closed by 139.178.89.65 port 59218 Mar 25 01:33:55.975651 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Mar 25 01:33:55.980314 systemd[1]: sshd@6-10.128.0.106:22-139.178.89.65:59218.service: Deactivated successfully. Mar 25 01:33:55.983721 systemd[1]: session-7.scope: Deactivated successfully. Mar 25 01:33:55.984044 systemd[1]: session-7.scope: Consumed 6.028s CPU time, 249.6M memory peak. Mar 25 01:33:55.986996 systemd-logind[1481]: Session 7 logged out. Waiting for processes to exit. Mar 25 01:33:55.989156 systemd-logind[1481]: Removed session 7. Mar 25 01:33:58.379460 update_engine[1483]: I20250325 01:33:58.379361 1483 update_attempter.cc:509] Updating boot flags... 
Mar 25 01:33:58.447452 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2873) Mar 25 01:33:58.610389 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2873) Mar 25 01:33:58.752426 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2873) Mar 25 01:34:05.203355 kubelet[2785]: I0325 01:34:05.203260 2785 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Mar 25 01:34:05.205186 containerd[1510]: time="2025-03-25T01:34:05.205113573Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 25 01:34:05.205810 kubelet[2785]: I0325 01:34:05.205669 2785 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Mar 25 01:34:06.136733 kubelet[2785]: I0325 01:34:06.136663 2785 topology_manager.go:215] "Topology Admit Handler" podUID="268399ef-cddb-4c08-9a4b-1f1de33ec2d0" podNamespace="kube-system" podName="kube-proxy-w5m78" Mar 25 01:34:06.152553 systemd[1]: Created slice kubepods-besteffort-pod268399ef_cddb_4c08_9a4b_1f1de33ec2d0.slice - libcontainer container kubepods-besteffort-pod268399ef_cddb_4c08_9a4b_1f1de33ec2d0.slice. 
Mar 25 01:34:06.253342 kubelet[2785]: I0325 01:34:06.252795 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/268399ef-cddb-4c08-9a4b-1f1de33ec2d0-kube-proxy\") pod \"kube-proxy-w5m78\" (UID: \"268399ef-cddb-4c08-9a4b-1f1de33ec2d0\") " pod="kube-system/kube-proxy-w5m78" Mar 25 01:34:06.253342 kubelet[2785]: I0325 01:34:06.252858 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/268399ef-cddb-4c08-9a4b-1f1de33ec2d0-xtables-lock\") pod \"kube-proxy-w5m78\" (UID: \"268399ef-cddb-4c08-9a4b-1f1de33ec2d0\") " pod="kube-system/kube-proxy-w5m78" Mar 25 01:34:06.253342 kubelet[2785]: I0325 01:34:06.252894 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/268399ef-cddb-4c08-9a4b-1f1de33ec2d0-lib-modules\") pod \"kube-proxy-w5m78\" (UID: \"268399ef-cddb-4c08-9a4b-1f1de33ec2d0\") " pod="kube-system/kube-proxy-w5m78" Mar 25 01:34:06.253342 kubelet[2785]: I0325 01:34:06.252925 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8255\" (UniqueName: \"kubernetes.io/projected/268399ef-cddb-4c08-9a4b-1f1de33ec2d0-kube-api-access-x8255\") pod \"kube-proxy-w5m78\" (UID: \"268399ef-cddb-4c08-9a4b-1f1de33ec2d0\") " pod="kube-system/kube-proxy-w5m78" Mar 25 01:34:06.266555 kubelet[2785]: I0325 01:34:06.266496 2785 topology_manager.go:215] "Topology Admit Handler" podUID="c282f12a-91ab-466f-89af-734f6050153e" podNamespace="tigera-operator" podName="tigera-operator-6479d6dc54-rp28c" Mar 25 01:34:06.283098 systemd[1]: Created slice kubepods-besteffort-podc282f12a_91ab_466f_89af_734f6050153e.slice - libcontainer container kubepods-besteffort-podc282f12a_91ab_466f_89af_734f6050153e.slice. 
Mar 25 01:34:06.454510 kubelet[2785]: I0325 01:34:06.454458 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c282f12a-91ab-466f-89af-734f6050153e-var-lib-calico\") pod \"tigera-operator-6479d6dc54-rp28c\" (UID: \"c282f12a-91ab-466f-89af-734f6050153e\") " pod="tigera-operator/tigera-operator-6479d6dc54-rp28c" Mar 25 01:34:06.454727 kubelet[2785]: I0325 01:34:06.454527 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn26d\" (UniqueName: \"kubernetes.io/projected/c282f12a-91ab-466f-89af-734f6050153e-kube-api-access-xn26d\") pod \"tigera-operator-6479d6dc54-rp28c\" (UID: \"c282f12a-91ab-466f-89af-734f6050153e\") " pod="tigera-operator/tigera-operator-6479d6dc54-rp28c" Mar 25 01:34:06.464700 containerd[1510]: time="2025-03-25T01:34:06.464645971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w5m78,Uid:268399ef-cddb-4c08-9a4b-1f1de33ec2d0,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:06.495344 containerd[1510]: time="2025-03-25T01:34:06.494998889Z" level=info msg="connecting to shim 435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857" address="unix:///run/containerd/s/a50126f3c97b38e9dced3d67e3483f0348d86470e2e8144b4d1788d387d19944" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:06.530534 systemd[1]: Started cri-containerd-435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857.scope - libcontainer container 435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857. 
Mar 25 01:34:06.577232 containerd[1510]: time="2025-03-25T01:34:06.577044411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w5m78,Uid:268399ef-cddb-4c08-9a4b-1f1de33ec2d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857\"" Mar 25 01:34:06.584611 containerd[1510]: time="2025-03-25T01:34:06.582841826Z" level=info msg="CreateContainer within sandbox \"435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 25 01:34:06.588548 containerd[1510]: time="2025-03-25T01:34:06.588504221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-rp28c,Uid:c282f12a-91ab-466f-89af-734f6050153e,Namespace:tigera-operator,Attempt:0,}" Mar 25 01:34:06.599338 containerd[1510]: time="2025-03-25T01:34:06.597433707Z" level=info msg="Container 46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:06.614160 containerd[1510]: time="2025-03-25T01:34:06.614099993Z" level=info msg="CreateContainer within sandbox \"435e4e87e0515f265f1513dfb584c65a4b59ed74409d096246fca9c4763ed857\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357\"" Mar 25 01:34:06.615223 containerd[1510]: time="2025-03-25T01:34:06.615164359Z" level=info msg="StartContainer for \"46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357\"" Mar 25 01:34:06.621464 containerd[1510]: time="2025-03-25T01:34:06.619590937Z" level=info msg="connecting to shim 46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357" address="unix:///run/containerd/s/a50126f3c97b38e9dced3d67e3483f0348d86470e2e8144b4d1788d387d19944" protocol=ttrpc version=3 Mar 25 01:34:06.645613 containerd[1510]: time="2025-03-25T01:34:06.645526407Z" level=info msg="connecting to shim 
6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077" address="unix:///run/containerd/s/a30f95c329348fa521f1879447b58871dcbddd046ddf385af436226f1aec7301" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:06.662642 systemd[1]: Started cri-containerd-46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357.scope - libcontainer container 46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357. Mar 25 01:34:06.692737 systemd[1]: Started cri-containerd-6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077.scope - libcontainer container 6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077. Mar 25 01:34:06.759385 containerd[1510]: time="2025-03-25T01:34:06.757447301Z" level=info msg="StartContainer for \"46038f5b2ddf5e3f30b82e8b4dca17df617dc09d097e33240287e56d83181357\" returns successfully" Mar 25 01:34:06.808842 containerd[1510]: time="2025-03-25T01:34:06.808779831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-6479d6dc54-rp28c,Uid:c282f12a-91ab-466f-89af-734f6050153e,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077\"" Mar 25 01:34:06.812839 containerd[1510]: time="2025-03-25T01:34:06.812504562Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\"" Mar 25 01:34:09.238966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1290510097.mount: Deactivated successfully. 
Mar 25 01:34:10.040112 containerd[1510]: time="2025-03-25T01:34:10.040038341Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:10.042190 containerd[1510]: time="2025-03-25T01:34:10.042039061Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.5: active requests=0, bytes read=21945008" Mar 25 01:34:10.044336 containerd[1510]: time="2025-03-25T01:34:10.043611058Z" level=info msg="ImageCreate event name:\"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:10.047615 containerd[1510]: time="2025-03-25T01:34:10.047520081Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:10.048735 containerd[1510]: time="2025-03-25T01:34:10.048694335Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.5\" with image id \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\", repo tag \"quay.io/tigera/operator:v1.36.5\", repo digest \"quay.io/tigera/operator@sha256:3341fa9475c0325b86228c8726389f9bae9fd6c430c66fe5cd5dc39d7bb6ad4b\", size \"21941003\" in 3.236137556s" Mar 25 01:34:10.049119 containerd[1510]: time="2025-03-25T01:34:10.048969079Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.5\" returns image reference \"sha256:dc4a8a56c133edb1bc4c3d6bc94bcd96f2bde82413370cb1783ac2d7f3a46d53\"" Mar 25 01:34:10.053235 containerd[1510]: time="2025-03-25T01:34:10.053191788Z" level=info msg="CreateContainer within sandbox \"6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Mar 25 01:34:10.066321 containerd[1510]: time="2025-03-25T01:34:10.065440924Z" level=info msg="Container 
08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:10.080567 containerd[1510]: time="2025-03-25T01:34:10.080497138Z" level=info msg="CreateContainer within sandbox \"6e9cefae35ef347588ef4e3d2bfefe4deac85c015ee424c05953fefde5f24077\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6\"" Mar 25 01:34:10.083222 containerd[1510]: time="2025-03-25T01:34:10.081408049Z" level=info msg="StartContainer for \"08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6\"" Mar 25 01:34:10.083222 containerd[1510]: time="2025-03-25T01:34:10.082716134Z" level=info msg="connecting to shim 08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6" address="unix:///run/containerd/s/a30f95c329348fa521f1879447b58871dcbddd046ddf385af436226f1aec7301" protocol=ttrpc version=3 Mar 25 01:34:10.120608 systemd[1]: Started cri-containerd-08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6.scope - libcontainer container 08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6. 
Mar 25 01:34:10.182157 containerd[1510]: time="2025-03-25T01:34:10.181733377Z" level=info msg="StartContainer for \"08329372aae0ecb1c87a06e67ba0d5f8a3b7b0d74a695ac41d647584b8fde9b6\" returns successfully" Mar 25 01:34:10.202328 kubelet[2785]: I0325 01:34:10.201034 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-w5m78" podStartSLOduration=4.201001802 podStartE2EDuration="4.201001802s" podCreationTimestamp="2025-03-25 01:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:34:07.294817096 +0000 UTC m=+17.280692226" watchObservedRunningTime="2025-03-25 01:34:10.201001802 +0000 UTC m=+20.186876934" Mar 25 01:34:10.304824 kubelet[2785]: I0325 01:34:10.303204 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-6479d6dc54-rp28c" podStartSLOduration=1.064201731 podStartE2EDuration="4.303177676s" podCreationTimestamp="2025-03-25 01:34:06 +0000 UTC" firstStartedPulling="2025-03-25 01:34:06.811435562 +0000 UTC m=+16.797310680" lastFinishedPulling="2025-03-25 01:34:10.050411519 +0000 UTC m=+20.036286625" observedRunningTime="2025-03-25 01:34:10.302775062 +0000 UTC m=+20.288650192" watchObservedRunningTime="2025-03-25 01:34:10.303177676 +0000 UTC m=+20.289052806" Mar 25 01:34:13.635157 kubelet[2785]: I0325 01:34:13.633464 2785 topology_manager.go:215] "Topology Admit Handler" podUID="66b18176-004d-49de-83b6-365527954d1a" podNamespace="calico-system" podName="calico-typha-6c79d5dd48-h6wcz" Mar 25 01:34:13.650758 systemd[1]: Created slice kubepods-besteffort-pod66b18176_004d_49de_83b6_365527954d1a.slice - libcontainer container kubepods-besteffort-pod66b18176_004d_49de_83b6_365527954d1a.slice. 
Mar 25 01:34:13.767800 kubelet[2785]: I0325 01:34:13.767719 2785 topology_manager.go:215] "Topology Admit Handler" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" podNamespace="calico-system" podName="calico-node-9qtgm" Mar 25 01:34:13.784113 systemd[1]: Created slice kubepods-besteffort-pod7220257e_c04d_4718_a529_cbf87b01b9fd.slice - libcontainer container kubepods-besteffort-pod7220257e_c04d_4718_a529_cbf87b01b9fd.slice. Mar 25 01:34:13.802673 kubelet[2785]: I0325 01:34:13.802626 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66b18176-004d-49de-83b6-365527954d1a-typha-certs\") pod \"calico-typha-6c79d5dd48-h6wcz\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " pod="calico-system/calico-typha-6c79d5dd48-h6wcz" Mar 25 01:34:13.803007 kubelet[2785]: I0325 01:34:13.802963 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njqjg\" (UniqueName: \"kubernetes.io/projected/66b18176-004d-49de-83b6-365527954d1a-kube-api-access-njqjg\") pod \"calico-typha-6c79d5dd48-h6wcz\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " pod="calico-system/calico-typha-6c79d5dd48-h6wcz" Mar 25 01:34:13.803359 kubelet[2785]: I0325 01:34:13.803222 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b18176-004d-49de-83b6-365527954d1a-tigera-ca-bundle\") pod \"calico-typha-6c79d5dd48-h6wcz\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " pod="calico-system/calico-typha-6c79d5dd48-h6wcz" Mar 25 01:34:13.901431 kubelet[2785]: I0325 01:34:13.898975 2785 topology_manager.go:215] "Topology Admit Handler" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" podNamespace="calico-system" podName="csi-node-driver-zbxnm" Mar 25 01:34:13.901431 kubelet[2785]: E0325 01:34:13.899402 2785 pod_workers.go:1298] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:13.904559 kubelet[2785]: I0325 01:34:13.904499 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-lib-calico\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904692 kubelet[2785]: I0325 01:34:13.904570 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-xtables-lock\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904692 kubelet[2785]: I0325 01:34:13.904602 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-lib-modules\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904692 kubelet[2785]: I0325 01:34:13.904626 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7220257e-c04d-4718-a529-cbf87b01b9fd-node-certs\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904692 kubelet[2785]: I0325 01:34:13.904658 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: 
\"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-net-dir\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904922 kubelet[2785]: I0325 01:34:13.904684 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-log-dir\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904922 kubelet[2785]: I0325 01:34:13.904737 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-flexvol-driver-host\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904922 kubelet[2785]: I0325 01:34:13.904773 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjqdt\" (UniqueName: \"kubernetes.io/projected/7220257e-c04d-4718-a529-cbf87b01b9fd-kube-api-access-mjqdt\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904922 kubelet[2785]: I0325 01:34:13.904823 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7220257e-c04d-4718-a529-cbf87b01b9fd-tigera-ca-bundle\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.904922 kubelet[2785]: I0325 01:34:13.904855 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-policysync\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.905138 kubelet[2785]: I0325 01:34:13.904882 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-run-calico\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.905138 kubelet[2785]: I0325 01:34:13.904910 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-bin-dir\") pod \"calico-node-9qtgm\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " pod="calico-system/calico-node-9qtgm" Mar 25 01:34:13.967333 containerd[1510]: time="2025-03-25T01:34:13.966146339Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c79d5dd48-h6wcz,Uid:66b18176-004d-49de-83b6-365527954d1a,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:14.008328 kubelet[2785]: I0325 01:34:14.006441 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/973a62e1-0a63-45ab-979a-41e23aff93de-socket-dir\") pod \"csi-node-driver-zbxnm\" (UID: \"973a62e1-0a63-45ab-979a-41e23aff93de\") " pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:14.011608 kubelet[2785]: I0325 01:34:14.008957 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/973a62e1-0a63-45ab-979a-41e23aff93de-registration-dir\") pod \"csi-node-driver-zbxnm\" (UID: \"973a62e1-0a63-45ab-979a-41e23aff93de\") " pod="calico-system/csi-node-driver-zbxnm" Mar 25 
01:34:14.011608 kubelet[2785]: I0325 01:34:14.009157 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d6c5v\" (UniqueName: \"kubernetes.io/projected/973a62e1-0a63-45ab-979a-41e23aff93de-kube-api-access-d6c5v\") pod \"csi-node-driver-zbxnm\" (UID: \"973a62e1-0a63-45ab-979a-41e23aff93de\") " pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:14.011608 kubelet[2785]: I0325 01:34:14.009665 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/973a62e1-0a63-45ab-979a-41e23aff93de-varrun\") pod \"csi-node-driver-zbxnm\" (UID: \"973a62e1-0a63-45ab-979a-41e23aff93de\") " pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:14.011608 kubelet[2785]: I0325 01:34:14.009842 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/973a62e1-0a63-45ab-979a-41e23aff93de-kubelet-dir\") pod \"csi-node-driver-zbxnm\" (UID: \"973a62e1-0a63-45ab-979a-41e23aff93de\") " pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:14.025376 containerd[1510]: time="2025-03-25T01:34:14.023714727Z" level=info msg="connecting to shim 30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92" address="unix:///run/containerd/s/a89e08328fefa22fe140cca01987777a442748d75666348070a76ae17b80c34b" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:14.027315 kubelet[2785]: E0325 01:34:14.025836 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.027315 kubelet[2785]: W0325 01:34:14.025873 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.027315 kubelet[2785]: E0325 01:34:14.025904 
2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.033333 kubelet[2785]: E0325 01:34:14.031654 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.033333 kubelet[2785]: W0325 01:34:14.031678 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.033333 kubelet[2785]: E0325 01:34:14.031729 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.059323 kubelet[2785]: E0325 01:34:14.056358 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.059323 kubelet[2785]: W0325 01:34:14.056416 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.059323 kubelet[2785]: E0325 01:34:14.056444 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.086865 systemd[1]: Started cri-containerd-30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92.scope - libcontainer container 30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92. 
Mar 25 01:34:14.095796 containerd[1510]: time="2025-03-25T01:34:14.095740965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qtgm,Uid:7220257e-c04d-4718-a529-cbf87b01b9fd,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:14.111658 kubelet[2785]: E0325 01:34:14.111589 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.111658 kubelet[2785]: W0325 01:34:14.111653 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.111901 kubelet[2785]: E0325 01:34:14.111683 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.113405 kubelet[2785]: E0325 01:34:14.113343 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.113405 kubelet[2785]: W0325 01:34:14.113369 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.113702 kubelet[2785]: E0325 01:34:14.113660 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.114568 kubelet[2785]: E0325 01:34:14.114513 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.114568 kubelet[2785]: W0325 01:34:14.114538 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.116388 kubelet[2785]: E0325 01:34:14.114619 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.116388 kubelet[2785]: E0325 01:34:14.116368 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.116388 kubelet[2785]: W0325 01:34:14.116385 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.116547 kubelet[2785]: E0325 01:34:14.116414 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.117151 kubelet[2785]: E0325 01:34:14.116941 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.117151 kubelet[2785]: W0325 01:34:14.116963 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.117151 kubelet[2785]: E0325 01:34:14.117106 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.118525 kubelet[2785]: E0325 01:34:14.118176 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.118525 kubelet[2785]: W0325 01:34:14.118211 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.118525 kubelet[2785]: E0325 01:34:14.118348 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.120652 kubelet[2785]: E0325 01:34:14.118857 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.120652 kubelet[2785]: W0325 01:34:14.118877 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.120652 kubelet[2785]: E0325 01:34:14.118973 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.120652 kubelet[2785]: E0325 01:34:14.119806 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.120652 kubelet[2785]: W0325 01:34:14.119839 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.120652 kubelet[2785]: E0325 01:34:14.120254 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.121606 kubelet[2785]: E0325 01:34:14.121553 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.121606 kubelet[2785]: W0325 01:34:14.121603 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.122204 kubelet[2785]: E0325 01:34:14.122166 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.122644 kubelet[2785]: E0325 01:34:14.122474 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.122740 kubelet[2785]: W0325 01:34:14.122646 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.122805 kubelet[2785]: E0325 01:34:14.122780 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.123748 kubelet[2785]: E0325 01:34:14.123399 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.123748 kubelet[2785]: W0325 01:34:14.123419 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.123748 kubelet[2785]: E0325 01:34:14.123583 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.125320 kubelet[2785]: E0325 01:34:14.124679 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.125320 kubelet[2785]: W0325 01:34:14.124707 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.125494 kubelet[2785]: E0325 01:34:14.125458 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.126848 kubelet[2785]: E0325 01:34:14.126816 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.126848 kubelet[2785]: W0325 01:34:14.126843 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.127003 kubelet[2785]: E0325 01:34:14.126966 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.128310 kubelet[2785]: E0325 01:34:14.127884 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.128310 kubelet[2785]: W0325 01:34:14.128036 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.128632 kubelet[2785]: E0325 01:34:14.128541 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.130223 kubelet[2785]: E0325 01:34:14.129458 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.130223 kubelet[2785]: W0325 01:34:14.129479 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.130223 kubelet[2785]: E0325 01:34:14.130132 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.131028 kubelet[2785]: E0325 01:34:14.130982 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.131283 kubelet[2785]: W0325 01:34:14.131008 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.133316 kubelet[2785]: E0325 01:34:14.132847 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.133601 kubelet[2785]: E0325 01:34:14.133567 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.133707 kubelet[2785]: W0325 01:34:14.133591 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.134327 kubelet[2785]: E0325 01:34:14.133833 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.135385 kubelet[2785]: E0325 01:34:14.135356 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.135385 kubelet[2785]: W0325 01:34:14.135381 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.137332 kubelet[2785]: E0325 01:34:14.137140 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.138024 kubelet[2785]: E0325 01:34:14.137622 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.138024 kubelet[2785]: W0325 01:34:14.137641 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.138178 kubelet[2785]: E0325 01:34:14.138087 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.139019 kubelet[2785]: E0325 01:34:14.138721 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.139019 kubelet[2785]: W0325 01:34:14.138765 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.139672 kubelet[2785]: E0325 01:34:14.139327 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.141003 kubelet[2785]: E0325 01:34:14.140974 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.141138 kubelet[2785]: W0325 01:34:14.141041 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.142344 kubelet[2785]: E0325 01:34:14.141423 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.142645 kubelet[2785]: E0325 01:34:14.142618 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.142645 kubelet[2785]: W0325 01:34:14.142645 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.143084 kubelet[2785]: E0325 01:34:14.143038 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.143209 kubelet[2785]: E0325 01:34:14.143086 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.143728 kubelet[2785]: W0325 01:34:14.143338 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.143728 kubelet[2785]: E0325 01:34:14.143535 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.145340 kubelet[2785]: E0325 01:34:14.145318 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.145507 kubelet[2785]: W0325 01:34:14.145343 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.145507 kubelet[2785]: E0325 01:34:14.145369 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.145802 kubelet[2785]: E0325 01:34:14.145759 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.145802 kubelet[2785]: W0325 01:34:14.145775 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.145802 kubelet[2785]: E0325 01:34:14.145793 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 25 01:34:14.153666 containerd[1510]: time="2025-03-25T01:34:14.153531445Z" level=info msg="connecting to shim 87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:34:14.187384 kubelet[2785]: E0325 01:34:14.187188 2785 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 25 01:34:14.188570 kubelet[2785]: W0325 01:34:14.187222 2785 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 25 01:34:14.188570 kubelet[2785]: E0325 01:34:14.188380 2785 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 25 01:34:14.221700 systemd[1]: Started cri-containerd-87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896.scope - libcontainer container 87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896. 
Mar 25 01:34:14.283939 containerd[1510]: time="2025-03-25T01:34:14.283879520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-9qtgm,Uid:7220257e-c04d-4718-a529-cbf87b01b9fd,Namespace:calico-system,Attempt:0,} returns sandbox id \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\"" Mar 25 01:34:14.287106 containerd[1510]: time="2025-03-25T01:34:14.287058820Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 25 01:34:14.338323 containerd[1510]: time="2025-03-25T01:34:14.337471342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c79d5dd48-h6wcz,Uid:66b18176-004d-49de-83b6-365527954d1a,Namespace:calico-system,Attempt:0,} returns sandbox id \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\"" Mar 25 01:34:15.229959 containerd[1510]: time="2025-03-25T01:34:15.229889903Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:15.231681 containerd[1510]: time="2025-03-25T01:34:15.231577561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=5364011" Mar 25 01:34:15.233267 containerd[1510]: time="2025-03-25T01:34:15.233197167Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:15.236688 containerd[1510]: time="2025-03-25T01:34:15.236605212Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:15.237695 containerd[1510]: time="2025-03-25T01:34:15.237641880Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id 
\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 950.534815ms" Mar 25 01:34:15.237806 containerd[1510]: time="2025-03-25T01:34:15.237701322Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 25 01:34:15.241835 containerd[1510]: time="2025-03-25T01:34:15.241519848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\"" Mar 25 01:34:15.243944 containerd[1510]: time="2025-03-25T01:34:15.243120315Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 01:34:15.257304 containerd[1510]: time="2025-03-25T01:34:15.257238801Z" level=info msg="Container 4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:15.270832 containerd[1510]: time="2025-03-25T01:34:15.270770165Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\"" Mar 25 01:34:15.271809 containerd[1510]: time="2025-03-25T01:34:15.271769821Z" level=info msg="StartContainer for \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\"" Mar 25 01:34:15.274316 containerd[1510]: time="2025-03-25T01:34:15.274203996Z" level=info msg="connecting to shim 4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" 
protocol=ttrpc version=3 Mar 25 01:34:15.303556 systemd[1]: Started cri-containerd-4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849.scope - libcontainer container 4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849. Mar 25 01:34:15.384570 containerd[1510]: time="2025-03-25T01:34:15.384485277Z" level=info msg="StartContainer for \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" returns successfully" Mar 25 01:34:15.408374 systemd[1]: cri-containerd-4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849.scope: Deactivated successfully. Mar 25 01:34:15.413955 containerd[1510]: time="2025-03-25T01:34:15.413867735Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" id:\"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" pid:3320 exited_at:{seconds:1742866455 nanos:413050822}" Mar 25 01:34:15.414682 containerd[1510]: time="2025-03-25T01:34:15.414584916Z" level=info msg="received exit event container_id:\"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" id:\"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" pid:3320 exited_at:{seconds:1742866455 nanos:413050822}" Mar 25 01:34:15.455861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849-rootfs.mount: Deactivated successfully. 
Mar 25 01:34:16.174316 kubelet[2785]: E0325 01:34:16.173039 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:17.219060 containerd[1510]: time="2025-03-25T01:34:17.218987972Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:17.220618 containerd[1510]: time="2025-03-25T01:34:17.220530906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.2: active requests=0, bytes read=30414075" Mar 25 01:34:17.222336 containerd[1510]: time="2025-03-25T01:34:17.222203972Z" level=info msg="ImageCreate event name:\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:17.225521 containerd[1510]: time="2025-03-25T01:34:17.225451981Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:17.226582 containerd[1510]: time="2025-03-25T01:34:17.226419082Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.2\" with image id \"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:9839fd34b4c1bad50beed72aec59c64893487a46eea57dc2d7d66c3041d7bcce\", size \"31907171\" in 1.984843583s" Mar 25 01:34:17.226582 containerd[1510]: time="2025-03-25T01:34:17.226465833Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.2\" returns image reference 
\"sha256:1d6f9d005866d74e6f0a8b0b8b743d0eaf4efcb7c7032fd2215da9c6ca131cb5\"" Mar 25 01:34:17.228994 containerd[1510]: time="2025-03-25T01:34:17.227939035Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 25 01:34:17.251528 containerd[1510]: time="2025-03-25T01:34:17.251455134Z" level=info msg="CreateContainer within sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Mar 25 01:34:17.261577 containerd[1510]: time="2025-03-25T01:34:17.261522250Z" level=info msg="Container 443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:17.275271 containerd[1510]: time="2025-03-25T01:34:17.275206570Z" level=info msg="CreateContainer within sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\"" Mar 25 01:34:17.277731 containerd[1510]: time="2025-03-25T01:34:17.275919395Z" level=info msg="StartContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\"" Mar 25 01:34:17.277731 containerd[1510]: time="2025-03-25T01:34:17.277581095Z" level=info msg="connecting to shim 443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb" address="unix:///run/containerd/s/a89e08328fefa22fe140cca01987777a442748d75666348070a76ae17b80c34b" protocol=ttrpc version=3 Mar 25 01:34:17.325569 systemd[1]: Started cri-containerd-443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb.scope - libcontainer container 443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb. 
Mar 25 01:34:17.414917 containerd[1510]: time="2025-03-25T01:34:17.414834979Z" level=info msg="StartContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" returns successfully" Mar 25 01:34:18.174536 kubelet[2785]: E0325 01:34:18.172898 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:18.378669 kubelet[2785]: I0325 01:34:18.377738 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6c79d5dd48-h6wcz" podStartSLOduration=2.488999488 podStartE2EDuration="5.37755727s" podCreationTimestamp="2025-03-25 01:34:13 +0000 UTC" firstStartedPulling="2025-03-25 01:34:14.3391644 +0000 UTC m=+24.325039519" lastFinishedPulling="2025-03-25 01:34:17.227722178 +0000 UTC m=+27.213597301" observedRunningTime="2025-03-25 01:34:18.375658397 +0000 UTC m=+28.361533519" watchObservedRunningTime="2025-03-25 01:34:18.37755727 +0000 UTC m=+28.363432417" Mar 25 01:34:19.359790 kubelet[2785]: I0325 01:34:19.357868 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:34:20.175276 kubelet[2785]: E0325 01:34:20.175227 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:21.491851 containerd[1510]: time="2025-03-25T01:34:21.491779822Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:21.493216 containerd[1510]: 
time="2025-03-25T01:34:21.493133797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 25 01:34:21.494811 containerd[1510]: time="2025-03-25T01:34:21.494678174Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:21.498001 containerd[1510]: time="2025-03-25T01:34:21.497915403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:21.499341 containerd[1510]: time="2025-03-25T01:34:21.498799872Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 4.270819207s" Mar 25 01:34:21.499341 containerd[1510]: time="2025-03-25T01:34:21.498850763Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 25 01:34:21.502218 containerd[1510]: time="2025-03-25T01:34:21.502149092Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 01:34:21.519537 containerd[1510]: time="2025-03-25T01:34:21.519471327Z" level=info msg="Container 4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:21.542143 containerd[1510]: time="2025-03-25T01:34:21.542083936Z" level=info msg="CreateContainer within sandbox 
\"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\"" Mar 25 01:34:21.545631 containerd[1510]: time="2025-03-25T01:34:21.545532121Z" level=info msg="StartContainer for \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\"" Mar 25 01:34:21.550326 containerd[1510]: time="2025-03-25T01:34:21.547707598Z" level=info msg="connecting to shim 4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" protocol=ttrpc version=3 Mar 25 01:34:21.622483 systemd[1]: Started cri-containerd-4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb.scope - libcontainer container 4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb. Mar 25 01:34:21.717528 containerd[1510]: time="2025-03-25T01:34:21.717438622Z" level=info msg="StartContainer for \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" returns successfully" Mar 25 01:34:22.175977 kubelet[2785]: E0325 01:34:22.174569 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:22.808945 containerd[1510]: time="2025-03-25T01:34:22.808879261Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 25 01:34:22.812767 systemd[1]: cri-containerd-4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb.scope: Deactivated successfully. 
Mar 25 01:34:22.813401 systemd[1]: cri-containerd-4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb.scope: Consumed 686ms CPU time, 176.6M memory peak, 154M written to disk. Mar 25 01:34:22.817125 containerd[1510]: time="2025-03-25T01:34:22.816591474Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" id:\"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" pid:3423 exited_at:{seconds:1742866462 nanos:814564902}" Mar 25 01:34:22.817125 containerd[1510]: time="2025-03-25T01:34:22.816595671Z" level=info msg="received exit event container_id:\"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" id:\"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" pid:3423 exited_at:{seconds:1742866462 nanos:814564902}" Mar 25 01:34:22.834241 kubelet[2785]: I0325 01:34:22.833580 2785 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 25 01:34:22.862976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb-rootfs.mount: Deactivated successfully. 
Mar 25 01:34:22.890421 kubelet[2785]: I0325 01:34:22.888362 2785 topology_manager.go:215] "Topology Admit Handler" podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" podNamespace="kube-system" podName="coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:22.899604 kubelet[2785]: I0325 01:34:22.899566 2785 topology_manager.go:215] "Topology Admit Handler" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" podNamespace="calico-apiserver" podName="calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:22.900252 kubelet[2785]: I0325 01:34:22.900028 2785 topology_manager.go:215] "Topology Admit Handler" podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-glkrr" Mar 25 01:34:22.902992 kubelet[2785]: I0325 01:34:22.901829 2785 topology_manager.go:215] "Topology Admit Handler" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" podNamespace="calico-system" podName="calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:22.915178 kubelet[2785]: I0325 01:34:22.908990 2785 topology_manager.go:215] "Topology Admit Handler" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" podNamespace="calico-apiserver" podName="calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:22.909599 systemd[1]: Created slice kubepods-burstable-pod0ae54105_2fc2_4712_bcc0_03edef18e454.slice - libcontainer container kubepods-burstable-pod0ae54105_2fc2_4712_bcc0_03edef18e454.slice. Mar 25 01:34:22.934632 systemd[1]: Created slice kubepods-besteffort-pod58b0bc19_665e_4afb_b4c7_59149aa71196.slice - libcontainer container kubepods-besteffort-pod58b0bc19_665e_4afb_b4c7_59149aa71196.slice. Mar 25 01:34:22.947600 systemd[1]: Created slice kubepods-burstable-pod7890deb5_f2f5_4b95_bbc2_b10706e5343d.slice - libcontainer container kubepods-burstable-pod7890deb5_f2f5_4b95_bbc2_b10706e5343d.slice. Mar 25 01:34:22.966834 systemd[1]: Created slice kubepods-besteffort-podebc6596e_ec52_4fcc_a122_1403cd0be893.slice - libcontainer container kubepods-besteffort-podebc6596e_ec52_4fcc_a122_1403cd0be893.slice. 
Mar 25 01:34:22.978257 systemd[1]: Created slice kubepods-besteffort-pod7e7c4ede_6986_4ec3_afbd_544ef8b098a2.slice - libcontainer container kubepods-besteffort-pod7e7c4ede_6986_4ec3_afbd_544ef8b098a2.slice. Mar 25 01:34:22.995810 kubelet[2785]: I0325 01:34:22.995165 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7e7c4ede-6986-4ec3-afbd-544ef8b098a2-calico-apiserver-certs\") pod \"calico-apiserver-96c65cfff-4c2dm\" (UID: \"7e7c4ede-6986-4ec3-afbd-544ef8b098a2\") " pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:22.995810 kubelet[2785]: I0325 01:34:22.995358 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqp5n\" (UniqueName: \"kubernetes.io/projected/7e7c4ede-6986-4ec3-afbd-544ef8b098a2-kube-api-access-vqp5n\") pod \"calico-apiserver-96c65cfff-4c2dm\" (UID: \"7e7c4ede-6986-4ec3-afbd-544ef8b098a2\") " pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:22.995810 kubelet[2785]: I0325 01:34:22.995425 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/58b0bc19-665e-4afb-b4c7-59149aa71196-calico-apiserver-certs\") pod \"calico-apiserver-96c65cfff-xcvmz\" (UID: \"58b0bc19-665e-4afb-b4c7-59149aa71196\") " pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:22.995810 kubelet[2785]: I0325 01:34:22.995459 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7l7k\" (UniqueName: \"kubernetes.io/projected/ebc6596e-ec52-4fcc-a122-1403cd0be893-kube-api-access-v7l7k\") pod \"calico-kube-controllers-7b59f6c75b-4wn56\" (UID: \"ebc6596e-ec52-4fcc-a122-1403cd0be893\") " pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:22.995810 kubelet[2785]: I0325 
01:34:22.995490 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7890deb5-f2f5-4b95-bbc2-b10706e5343d-config-volume\") pod \"coredns-7db6d8ff4d-glkrr\" (UID: \"7890deb5-f2f5-4b95-bbc2-b10706e5343d\") " pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:22.996466 kubelet[2785]: I0325 01:34:22.995524 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0ae54105-2fc2-4712-bcc0-03edef18e454-config-volume\") pod \"coredns-7db6d8ff4d-pkm5c\" (UID: \"0ae54105-2fc2-4712-bcc0-03edef18e454\") " pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:22.996466 kubelet[2785]: I0325 01:34:22.995556 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sntlx\" (UniqueName: \"kubernetes.io/projected/0ae54105-2fc2-4712-bcc0-03edef18e454-kube-api-access-sntlx\") pod \"coredns-7db6d8ff4d-pkm5c\" (UID: \"0ae54105-2fc2-4712-bcc0-03edef18e454\") " pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:22.996466 kubelet[2785]: I0325 01:34:22.995587 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dqjm\" (UniqueName: \"kubernetes.io/projected/58b0bc19-665e-4afb-b4c7-59149aa71196-kube-api-access-2dqjm\") pod \"calico-apiserver-96c65cfff-xcvmz\" (UID: \"58b0bc19-665e-4afb-b4c7-59149aa71196\") " pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:22.996466 kubelet[2785]: I0325 01:34:22.995616 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjz8r\" (UniqueName: \"kubernetes.io/projected/7890deb5-f2f5-4b95-bbc2-b10706e5343d-kube-api-access-jjz8r\") pod \"coredns-7db6d8ff4d-glkrr\" (UID: \"7890deb5-f2f5-4b95-bbc2-b10706e5343d\") " 
pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:22.996466 kubelet[2785]: I0325 01:34:22.995645 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc6596e-ec52-4fcc-a122-1403cd0be893-tigera-ca-bundle\") pod \"calico-kube-controllers-7b59f6c75b-4wn56\" (UID: \"ebc6596e-ec52-4fcc-a122-1403cd0be893\") " pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:23.070676 systemd[1]: Started sshd@7-10.128.0.106:22-194.0.234.35:40070.service - OpenSSH per-connection server daemon (194.0.234.35:40070). Mar 25 01:34:23.225275 containerd[1510]: time="2025-03-25T01:34:23.225211897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:23.246335 containerd[1510]: time="2025-03-25T01:34:23.246252406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:23.259473 containerd[1510]: time="2025-03-25T01:34:23.259400835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:23.277387 containerd[1510]: time="2025-03-25T01:34:23.277007676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:23.284375 containerd[1510]: time="2025-03-25T01:34:23.284327118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:24.065467 sshd[3452]: Connection closed by authenticating user root 194.0.234.35 port 40070 [preauth] Mar 25 
01:34:24.068639 systemd[1]: sshd@7-10.128.0.106:22-194.0.234.35:40070.service: Deactivated successfully. Mar 25 01:34:24.172335 containerd[1510]: time="2025-03-25T01:34:24.172258882Z" level=error msg="Failed to destroy network for sandbox \"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.179397 containerd[1510]: time="2025-03-25T01:34:24.179339244Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.180683 systemd[1]: run-netns-cni\x2dce8f2416\x2d5e84\x2d897e\x2d10ec\x2dfe8d625f7e8b.mount: Deactivated successfully. 
Mar 25 01:34:24.187510 kubelet[2785]: E0325 01:34:24.185335 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.187510 kubelet[2785]: E0325 01:34:24.185423 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:24.187510 kubelet[2785]: E0325 01:34:24.185457 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:24.188403 kubelet[2785]: E0325 01:34:24.185527 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"936cb2c9ac30b7366663b16c7dd741991a59b340f61cb0944194472b070fa1e5\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pkm5c" podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" Mar 25 01:34:24.201938 systemd[1]: Created slice kubepods-besteffort-pod973a62e1_0a63_45ab_979a_41e23aff93de.slice - libcontainer container kubepods-besteffort-pod973a62e1_0a63_45ab_979a_41e23aff93de.slice. Mar 25 01:34:24.209662 containerd[1510]: time="2025-03-25T01:34:24.209615903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:24.210842 containerd[1510]: time="2025-03-25T01:34:24.209959705Z" level=error msg="Failed to destroy network for sandbox \"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.217711 systemd[1]: run-netns-cni\x2dc14fa5b5\x2dfe12\x2d4069\x2d3c41\x2d421a0d5d1274.mount: Deactivated successfully. 
Mar 25 01:34:24.224620 containerd[1510]: time="2025-03-25T01:34:24.224466171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.225837 kubelet[2785]: E0325 01:34:24.225023 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.225837 kubelet[2785]: E0325 01:34:24.225335 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:24.225837 kubelet[2785]: E0325 01:34:24.225378 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:24.226394 kubelet[2785]: E0325 01:34:24.225683 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f045d759a3454dd3e14c915c09415128724dcf4c1a39ff457f656df26d09dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" Mar 25 01:34:24.252200 containerd[1510]: time="2025-03-25T01:34:24.252132332Z" level=error msg="Failed to destroy network for sandbox \"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.256025 containerd[1510]: time="2025-03-25T01:34:24.255955011Z" level=error msg="Failed to destroy network for sandbox \"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.256954 containerd[1510]: time="2025-03-25T01:34:24.256503373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.257219 kubelet[2785]: E0325 01:34:24.257048 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.257380 kubelet[2785]: E0325 01:34:24.257132 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:24.257380 kubelet[2785]: E0325 01:34:24.257348 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:24.257928 kubelet[2785]: E0325 01:34:24.257546 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dc79ab4cf6f0d77120a4b6cd05e4e81124be331c7fc869c8b4138189fe6699\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-glkrr" podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" Mar 25 01:34:24.259880 containerd[1510]: time="2025-03-25T01:34:24.259799365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.260212 kubelet[2785]: E0325 01:34:24.260053 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.260212 kubelet[2785]: E0325 01:34:24.260145 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:24.260212 kubelet[2785]: E0325 01:34:24.260175 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:24.260687 kubelet[2785]: E0325 01:34:24.260283 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca78b5a395825094770ebf03477c8cc1c12c4d40c5b7ff91fb36f33c20bff7af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" Mar 25 01:34:24.266849 containerd[1510]: time="2025-03-25T01:34:24.266743881Z" level=error msg="Failed to destroy network for sandbox \"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.269379 containerd[1510]: time="2025-03-25T01:34:24.268956156Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.269562 kubelet[2785]: E0325 01:34:24.269389 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.269562 kubelet[2785]: E0325 01:34:24.269484 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:24.269562 kubelet[2785]: E0325 01:34:24.269536 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:24.269755 kubelet[2785]: E0325 
01:34:24.269627 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92581cd342f407325cba4b16d761d06384008ef84fe30fe57a1fd8e9cda56b8d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" Mar 25 01:34:24.314881 containerd[1510]: time="2025-03-25T01:34:24.314790223Z" level=error msg="Failed to destroy network for sandbox \"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.316803 containerd[1510]: time="2025-03-25T01:34:24.316655857Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.317547 kubelet[2785]: E0325 01:34:24.317500 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:24.318198 kubelet[2785]: E0325 01:34:24.317758 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:24.318198 kubelet[2785]: E0325 01:34:24.317820 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:24.318198 kubelet[2785]: E0325 01:34:24.317910 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc85774ab26d77a31c5e5c4eb6708ff0ab84364053ee206ab66530e3c49d3f58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbxnm" 
podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:24.381486 containerd[1510]: time="2025-03-25T01:34:24.381424482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 25 01:34:24.859203 systemd[1]: run-netns-cni\x2d91c0cb3a\x2d4ab6\x2d68ec\x2d272d\x2d8f8ef3de1ba7.mount: Deactivated successfully. Mar 25 01:34:24.859444 systemd[1]: run-netns-cni\x2d8df843c7\x2dc992\x2d3e54\x2d1fb5\x2df5a66d743084.mount: Deactivated successfully. Mar 25 01:34:24.859550 systemd[1]: run-netns-cni\x2d7cc109ee\x2d3448\x2d9a9c\x2d59c1\x2d6f5649d9a332.mount: Deactivated successfully. Mar 25 01:34:30.572820 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3298181765.mount: Deactivated successfully. Mar 25 01:34:30.614205 containerd[1510]: time="2025-03-25T01:34:30.614124915Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:30.615617 containerd[1510]: time="2025-03-25T01:34:30.615507952Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 25 01:34:30.617182 containerd[1510]: time="2025-03-25T01:34:30.617090828Z" level=info msg="ImageCreate event name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:30.620038 containerd[1510]: time="2025-03-25T01:34:30.619996940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:34:30.621315 containerd[1510]: time="2025-03-25T01:34:30.620799064Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.239329226s" Mar 25 01:34:30.621315 containerd[1510]: time="2025-03-25T01:34:30.620847509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 25 01:34:30.647504 containerd[1510]: time="2025-03-25T01:34:30.644690926Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 01:34:30.662831 containerd[1510]: time="2025-03-25T01:34:30.662785392Z" level=info msg="Container a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:30.672845 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2066570547.mount: Deactivated successfully. Mar 25 01:34:30.681164 containerd[1510]: time="2025-03-25T01:34:30.681089466Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\"" Mar 25 01:34:30.683502 containerd[1510]: time="2025-03-25T01:34:30.681784982Z" level=info msg="StartContainer for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\"" Mar 25 01:34:30.684166 containerd[1510]: time="2025-03-25T01:34:30.684127296Z" level=info msg="connecting to shim a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" protocol=ttrpc version=3 Mar 25 01:34:30.710985 systemd[1]: Started cri-containerd-a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae.scope - libcontainer container 
a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae. Mar 25 01:34:30.778718 containerd[1510]: time="2025-03-25T01:34:30.778661261Z" level=info msg="StartContainer for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" returns successfully" Mar 25 01:34:30.889184 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 25 01:34:30.889381 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 25 01:34:30.918612 systemd[1]: cri-containerd-a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae.scope: Deactivated successfully. Mar 25 01:34:30.920701 containerd[1510]: time="2025-03-25T01:34:30.920626288Z" level=info msg="received exit event container_id:\"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" id:\"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" pid:3668 exit_status:1 exited_at:{seconds:1742866470 nanos:919992154}" Mar 25 01:34:30.921259 containerd[1510]: time="2025-03-25T01:34:30.921199609Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" id:\"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" pid:3668 exit_status:1 exited_at:{seconds:1742866470 nanos:919992154}" Mar 25 01:34:32.386207 containerd[1510]: time="2025-03-25T01:34:32.386142109Z" level=error msg="ExecSync for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"42331428184441d7c40d2caad07ecbcf7915089dd925ea00317ec93bfc3a2c6e\": ttrpc: closed" Mar 25 01:34:32.387929 containerd[1510]: time="2025-03-25T01:34:32.385683791Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9->@: write: broken pipe" runtime=io.containerd.runc.v2 Mar 25 01:34:32.388266 
kubelet[2785]: E0325 01:34:32.388188 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"42331428184441d7c40d2caad07ecbcf7915089dd925ea00317ec93bfc3a2c6e\": ttrpc: closed" containerID="a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:34:32.390634 containerd[1510]: time="2025-03-25T01:34:32.390173502Z" level=error msg="ExecSync for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Mar 25 01:34:32.390838 kubelet[2785]: E0325 01:34:32.390443 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:34:32.391746 containerd[1510]: time="2025-03-25T01:34:32.391231034Z" level=error msg="ExecSync for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Mar 25 01:34:32.391883 kubelet[2785]: E0325 01:34:32.391408 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:34:32.413639 kubelet[2785]: I0325 01:34:32.413605 2785 scope.go:117] "RemoveContainer" containerID="a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" Mar 25 01:34:32.419362 containerd[1510]: 
time="2025-03-25T01:34:32.419315483Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Mar 25 01:34:32.438100 containerd[1510]: time="2025-03-25T01:34:32.434357672Z" level=info msg="Container 4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:32.450586 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3999291686.mount: Deactivated successfully. Mar 25 01:34:32.466952 containerd[1510]: time="2025-03-25T01:34:32.466870945Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\"" Mar 25 01:34:32.467602 containerd[1510]: time="2025-03-25T01:34:32.467565265Z" level=info msg="StartContainer for \"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\"" Mar 25 01:34:32.471315 containerd[1510]: time="2025-03-25T01:34:32.470510988Z" level=info msg="connecting to shim 4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" protocol=ttrpc version=3 Mar 25 01:34:32.514631 systemd[1]: Started cri-containerd-4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240.scope - libcontainer container 4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240. Mar 25 01:34:32.585320 containerd[1510]: time="2025-03-25T01:34:32.583384509Z" level=info msg="StartContainer for \"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" returns successfully" Mar 25 01:34:32.647713 systemd[1]: cri-containerd-4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240.scope: Deactivated successfully. 
Mar 25 01:34:32.649849 containerd[1510]: time="2025-03-25T01:34:32.648930332Z" level=info msg="received exit event container_id:\"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" id:\"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" pid:3720 exit_status:1 exited_at:{seconds:1742866472 nanos:648627219}" Mar 25 01:34:32.649849 containerd[1510]: time="2025-03-25T01:34:32.649281102Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" id:\"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" pid:3720 exit_status:1 exited_at:{seconds:1742866472 nanos:648627219}" Mar 25 01:34:32.688724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240-rootfs.mount: Deactivated successfully. Mar 25 01:34:33.423868 kubelet[2785]: I0325 01:34:33.423829 2785 scope.go:117] "RemoveContainer" containerID="a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae" Mar 25 01:34:33.426131 kubelet[2785]: I0325 01:34:33.424394 2785 scope.go:117] "RemoveContainer" containerID="4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240" Mar 25 01:34:33.426131 kubelet[2785]: E0325 01:34:33.425236 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9qtgm_calico-system(7220257e-c04d-4718-a529-cbf87b01b9fd)\"" pod="calico-system/calico-node-9qtgm" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" Mar 25 01:34:33.431415 containerd[1510]: time="2025-03-25T01:34:33.431370927Z" level=info msg="RemoveContainer for \"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\"" Mar 25 01:34:33.442415 containerd[1510]: time="2025-03-25T01:34:33.442369328Z" level=info msg="RemoveContainer for 
\"a16ed464f35103b1868ddc092df9fa71f0c5fcce39030b4a7cfd07be2daaccae\" returns successfully" Mar 25 01:34:34.434751 kubelet[2785]: I0325 01:34:34.434693 2785 scope.go:117] "RemoveContainer" containerID="4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240" Mar 25 01:34:34.437357 kubelet[2785]: E0325 01:34:34.435577 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-9qtgm_calico-system(7220257e-c04d-4718-a529-cbf87b01b9fd)\"" pod="calico-system/calico-node-9qtgm" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" Mar 25 01:34:35.173261 containerd[1510]: time="2025-03-25T01:34:35.173197649Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:35.174013 containerd[1510]: time="2025-03-25T01:34:35.173197645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:35.306641 containerd[1510]: time="2025-03-25T01:34:35.306539506Z" level=error msg="Failed to destroy network for sandbox \"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.307544 containerd[1510]: time="2025-03-25T01:34:35.307428723Z" level=error msg="Failed to destroy network for sandbox \"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.310261 containerd[1510]: 
time="2025-03-25T01:34:35.310174991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.312081 kubelet[2785]: E0325 01:34:35.311911 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.312244 containerd[1510]: time="2025-03-25T01:34:35.312055704Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.313540 systemd[1]: run-netns-cni\x2d00932dbc\x2d9671\x2d7904\x2ddfa8\x2dc9598f4b0785.mount: Deactivated successfully. 
Mar 25 01:34:35.315881 kubelet[2785]: E0325 01:34:35.313790 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:35.315881 kubelet[2785]: E0325 01:34:35.315494 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:35.315881 kubelet[2785]: E0325 01:34:35.315557 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:35.315881 kubelet[2785]: E0325 01:34:35.315597 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:35.316189 kubelet[2785]: E0325 
01:34:35.315637 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:35.316189 kubelet[2785]: E0325 01:34:35.315751 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b980f7890b6ee1afa6ba8081f1cebf99c38446f84ed17e09b677c3493fa24f7e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:35.320139 systemd[1]: run-netns-cni\x2d15da9ba6\x2ddf47\x2d4956\x2d60f6\x2dd96942695b96.mount: Deactivated successfully. 
Mar 25 01:34:35.323086 kubelet[2785]: E0325 01:34:35.315804 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62dc298d490180962346e55e381be19bc43f7f9f2cf5a262239cb230a9a67892\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" Mar 25 01:34:36.173999 containerd[1510]: time="2025-03-25T01:34:36.173920500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:36.256967 containerd[1510]: time="2025-03-25T01:34:36.256869301Z" level=error msg="Failed to destroy network for sandbox \"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:36.261830 containerd[1510]: time="2025-03-25T01:34:36.261757721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:36.262283 kubelet[2785]: E0325 01:34:36.262174 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:36.263262 kubelet[2785]: E0325 01:34:36.262272 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:36.263262 kubelet[2785]: E0325 01:34:36.262369 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:36.263262 kubelet[2785]: E0325 01:34:36.262454 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"de4fd76263cf1c5c5c2432b7bb08dbec6a3cfcc27b8c14caa38a7987ac8c388b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" Mar 25 01:34:36.263812 systemd[1]: run-netns-cni\x2dfdc721b0\x2d2809\x2d1472\x2da0d9\x2d8355301bd2a4.mount: Deactivated successfully. Mar 25 01:34:37.173823 containerd[1510]: time="2025-03-25T01:34:37.173702628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:37.244577 containerd[1510]: time="2025-03-25T01:34:37.244492700Z" level=error msg="Failed to destroy network for sandbox \"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:37.247012 containerd[1510]: time="2025-03-25T01:34:37.246090080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:37.247553 kubelet[2785]: E0325 01:34:37.247496 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:37.247691 kubelet[2785]: E0325 01:34:37.247607 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:37.247691 kubelet[2785]: E0325 01:34:37.247642 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:37.247788 kubelet[2785]: E0325 01:34:37.247717 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed563b0516c815f6109f3897ddc365368b7bd40fc43854426feabe27f4ae3965\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-glkrr" 
podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" Mar 25 01:34:37.249622 systemd[1]: run-netns-cni\x2dd24293e4\x2d23dc\x2d7504\x2db6f5\x2ddfcc1c78d1fe.mount: Deactivated successfully. Mar 25 01:34:38.174032 containerd[1510]: time="2025-03-25T01:34:38.173724998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:38.264319 containerd[1510]: time="2025-03-25T01:34:38.264186309Z" level=error msg="Failed to destroy network for sandbox \"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:38.266263 containerd[1510]: time="2025-03-25T01:34:38.266162313Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:38.268617 kubelet[2785]: E0325 01:34:38.268501 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:38.268617 kubelet[2785]: E0325 01:34:38.268606 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:38.271713 kubelet[2785]: E0325 01:34:38.268642 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:38.271713 kubelet[2785]: E0325 01:34:38.268723 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4720191f38d5bd740c146ceff33d07075ca53ce9eb5a3dc5a9063d652dd3f4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" Mar 25 01:34:38.272522 systemd[1]: run-netns-cni\x2d8962f328\x2d1b08\x2db3f7\x2d1e5a\x2d23e73e1df130.mount: Deactivated successfully. 
Mar 25 01:34:39.173584 containerd[1510]: time="2025-03-25T01:34:39.173511807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:39.256981 containerd[1510]: time="2025-03-25T01:34:39.256880839Z" level=error msg="Failed to destroy network for sandbox \"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:39.259920 containerd[1510]: time="2025-03-25T01:34:39.259836159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:39.262256 kubelet[2785]: E0325 01:34:39.261623 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:39.262256 kubelet[2785]: E0325 01:34:39.261727 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:39.262256 kubelet[2785]: E0325 01:34:39.261767 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:39.262581 kubelet[2785]: E0325 01:34:39.261856 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a1d69619bc77dd3bfa05ae989bdb7e95b6c3bf945cea513d9ca83f42f3fd6e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pkm5c" podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" Mar 25 01:34:39.263352 systemd[1]: run-netns-cni\x2d0d7bc92c\x2d43c8\x2dbbe6\x2d10a2\x2df22fff3ef507.mount: Deactivated successfully. Mar 25 01:34:43.688054 kubelet[2785]: I0325 01:34:43.687424 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:34:45.652723 systemd[1]: Started sshd@8-10.128.0.106:22-139.178.89.65:55058.service - OpenSSH per-connection server daemon (139.178.89.65:55058). 
Mar 25 01:34:45.884224 kubelet[2785]: I0325 01:34:45.884169 2785 scope.go:117] "RemoveContainer" containerID="4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240" Mar 25 01:34:45.890317 containerd[1510]: time="2025-03-25T01:34:45.890131095Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Mar 25 01:34:45.906614 containerd[1510]: time="2025-03-25T01:34:45.906444210Z" level=info msg="Container 12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:34:45.922024 containerd[1510]: time="2025-03-25T01:34:45.921952102Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\"" Mar 25 01:34:45.923026 containerd[1510]: time="2025-03-25T01:34:45.922987095Z" level=info msg="StartContainer for \"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\"" Mar 25 01:34:45.925686 containerd[1510]: time="2025-03-25T01:34:45.925632986Z" level=info msg="connecting to shim 12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" protocol=ttrpc version=3 Mar 25 01:34:45.964593 systemd[1]: Started cri-containerd-12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206.scope - libcontainer container 12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206. 
Mar 25 01:34:45.976729 sshd[3936]: Accepted publickey for core from 139.178.89.65 port 55058 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:34:45.978527 sshd-session[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:45.990518 systemd-logind[1481]: New session 8 of user core. Mar 25 01:34:45.997590 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 25 01:34:46.065912 containerd[1510]: time="2025-03-25T01:34:46.065848749Z" level=info msg="StartContainer for \"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" returns successfully" Mar 25 01:34:46.195456 systemd[1]: cri-containerd-12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206.scope: Deactivated successfully. Mar 25 01:34:46.206400 containerd[1510]: time="2025-03-25T01:34:46.205894510Z" level=info msg="received exit event container_id:\"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" id:\"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" pid:3951 exit_status:1 exited_at:{seconds:1742866486 nanos:204883546}" Mar 25 01:34:46.207836 containerd[1510]: time="2025-03-25T01:34:46.207741180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" id:\"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" pid:3951 exit_status:1 exited_at:{seconds:1742866486 nanos:204883546}" Mar 25 01:34:46.275490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206-rootfs.mount: Deactivated successfully. Mar 25 01:34:46.400068 sshd[3957]: Connection closed by 139.178.89.65 port 55058 Mar 25 01:34:46.401064 sshd-session[3936]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:46.405984 systemd[1]: sshd@8-10.128.0.106:22-139.178.89.65:55058.service: Deactivated successfully. 
Mar 25 01:34:46.409235 systemd[1]: session-8.scope: Deactivated successfully. Mar 25 01:34:46.411714 systemd-logind[1481]: Session 8 logged out. Waiting for processes to exit. Mar 25 01:34:46.413205 systemd-logind[1481]: Removed session 8. Mar 25 01:34:46.471782 kubelet[2785]: I0325 01:34:46.471635 2785 scope.go:117] "RemoveContainer" containerID="4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240" Mar 25 01:34:46.472272 kubelet[2785]: I0325 01:34:46.472243 2785 scope.go:117] "RemoveContainer" containerID="12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206" Mar 25 01:34:46.473103 kubelet[2785]: E0325 01:34:46.473063 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9qtgm_calico-system(7220257e-c04d-4718-a529-cbf87b01b9fd)\"" pod="calico-system/calico-node-9qtgm" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" Mar 25 01:34:46.477886 containerd[1510]: time="2025-03-25T01:34:46.477754159Z" level=info msg="RemoveContainer for \"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\"" Mar 25 01:34:46.500102 containerd[1510]: time="2025-03-25T01:34:46.500048438Z" level=info msg="RemoveContainer for \"4e23a13eb0c9f42252a3169edad789c3906556d18b60188c54c1fd6e0f6d9240\" returns successfully" Mar 25 01:34:47.172952 containerd[1510]: time="2025-03-25T01:34:47.172896989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:47.243436 containerd[1510]: time="2025-03-25T01:34:47.243378939Z" level=error msg="Failed to destroy network for sandbox \"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Mar 25 01:34:47.246951 containerd[1510]: time="2025-03-25T01:34:47.246796052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:47.247160 kubelet[2785]: E0325 01:34:47.247100 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:47.247677 kubelet[2785]: E0325 01:34:47.247184 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:47.247677 kubelet[2785]: E0325 01:34:47.247218 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:47.248778 systemd[1]: run-netns-cni\x2dea1411c2\x2d9f17\x2d6a73\x2d8bd6\x2d665821e29ce4.mount: Deactivated successfully. Mar 25 01:34:47.250732 kubelet[2785]: E0325 01:34:47.248649 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6a5877d794e66d527d4ed385965ec59b7569608d90837cbf2f923a544a2e021e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" Mar 25 01:34:48.174006 containerd[1510]: time="2025-03-25T01:34:48.173523422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:48.246169 containerd[1510]: time="2025-03-25T01:34:48.246109221Z" level=error msg="Failed to destroy network for sandbox \"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:48.250375 containerd[1510]: time="2025-03-25T01:34:48.248844282Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} 
failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:48.250099 systemd[1]: run-netns-cni\x2d889fcfae\x2d5904\x2d26ca\x2d3327\x2d93e907d47ee9.mount: Deactivated successfully. Mar 25 01:34:48.250878 kubelet[2785]: E0325 01:34:48.249490 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:48.250878 kubelet[2785]: E0325 01:34:48.249557 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:48.250878 kubelet[2785]: E0325 01:34:48.249588 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:34:48.251506 kubelet[2785]: E0325 01:34:48.249646 2785 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2e39b4b5e180b6ad5902bade5a1c4664c97c7d30334db4aa7c6b77225372791b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" Mar 25 01:34:50.174053 containerd[1510]: time="2025-03-25T01:34:50.173574884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:50.175115 containerd[1510]: time="2025-03-25T01:34:50.174766266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:50.298949 containerd[1510]: time="2025-03-25T01:34:50.298737268Z" level=error msg="Failed to destroy network for sandbox \"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.304920 containerd[1510]: time="2025-03-25T01:34:50.304393039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.304558 systemd[1]: run-netns-cni\x2d5c086131\x2d5bc8\x2d4e63\x2daa46\x2da83f05bf66ac.mount: Deactivated successfully. Mar 25 01:34:50.309263 kubelet[2785]: E0325 01:34:50.307868 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.309263 kubelet[2785]: E0325 01:34:50.307968 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:50.309263 kubelet[2785]: E0325 01:34:50.308007 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:34:50.309899 kubelet[2785]: E0325 01:34:50.308077 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cc9c489da50c5a50344e404fefa3dc753469dc286f61f41b6b1b5dd01a43f6e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-glkrr" podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" Mar 25 01:34:50.311558 containerd[1510]: time="2025-03-25T01:34:50.311510610Z" level=error msg="Failed to destroy network for sandbox \"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.314313 containerd[1510]: time="2025-03-25T01:34:50.312903409Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.315240 kubelet[2785]: E0325 01:34:50.314973 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:50.315240 kubelet[2785]: E0325 01:34:50.315045 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:50.315240 kubelet[2785]: E0325 01:34:50.315077 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:34:50.315502 kubelet[2785]: E0325 01:34:50.315145 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a2448175b5c89d072361c0adc272e342ed30cde2c3148c8ff29175e91433f80\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:34:50.317210 systemd[1]: run-netns-cni\x2db9a4dd6a\x2d097a\x2d7f33\x2dd051\x2d9877ce346978.mount: Deactivated successfully. 
Mar 25 01:34:51.173600 containerd[1510]: time="2025-03-25T01:34:51.173531590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:34:51.258551 containerd[1510]: time="2025-03-25T01:34:51.258466075Z" level=error msg="Failed to destroy network for sandbox \"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:51.262104 containerd[1510]: time="2025-03-25T01:34:51.262027050Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:51.262844 kubelet[2785]: E0325 01:34:51.262777 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:51.262964 kubelet[2785]: E0325 01:34:51.262874 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:51.262964 kubelet[2785]: E0325 01:34:51.262910 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:34:51.263096 kubelet[2785]: E0325 01:34:51.262985 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"011b98720ab9bc62c75d292669f3087a6f0e40560ccc2b03e4928ceeb07f9b11\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" Mar 25 01:34:51.265408 systemd[1]: run-netns-cni\x2d9878112f\x2dcec9\x2d8fd8\x2d2974\x2df84510c4154e.mount: Deactivated successfully. Mar 25 01:34:51.456724 systemd[1]: Started sshd@9-10.128.0.106:22-139.178.89.65:54866.service - OpenSSH per-connection server daemon (139.178.89.65:54866). 
Mar 25 01:34:51.765777 sshd[4156]: Accepted publickey for core from 139.178.89.65 port 54866 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:34:51.767874 sshd-session[4156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:51.774755 systemd-logind[1481]: New session 9 of user core. Mar 25 01:34:51.782589 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 25 01:34:52.064328 sshd[4158]: Connection closed by 139.178.89.65 port 54866 Mar 25 01:34:52.065835 sshd-session[4156]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:52.071193 systemd[1]: sshd@9-10.128.0.106:22-139.178.89.65:54866.service: Deactivated successfully. Mar 25 01:34:52.074776 systemd[1]: session-9.scope: Deactivated successfully. Mar 25 01:34:52.077407 systemd-logind[1481]: Session 9 logged out. Waiting for processes to exit. Mar 25 01:34:52.079224 systemd-logind[1481]: Removed session 9. Mar 25 01:34:54.173628 containerd[1510]: time="2025-03-25T01:34:54.173007250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:34:54.250137 containerd[1510]: time="2025-03-25T01:34:54.250028594Z" level=error msg="Failed to destroy network for sandbox \"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:54.252547 containerd[1510]: time="2025-03-25T01:34:54.252473667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:54.253760 kubelet[2785]: E0325 01:34:54.253065 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:54.253760 kubelet[2785]: E0325 01:34:54.253167 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:54.253760 kubelet[2785]: E0325 01:34:54.253206 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:34:54.255559 kubelet[2785]: E0325 01:34:54.253281 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"0604263ea3df9b31eb09d44c175be91709e0478cf7cc7ea762e7afb569eb4bd1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pkm5c" podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" Mar 25 01:34:54.256819 systemd[1]: run-netns-cni\x2dd32f3415\x2d6f59\x2d77fb\x2d8ada\x2d90b55b1cc9a9.mount: Deactivated successfully. Mar 25 01:34:57.118961 systemd[1]: Started sshd@10-10.128.0.106:22-139.178.89.65:54878.service - OpenSSH per-connection server daemon (139.178.89.65:54878). Mar 25 01:34:57.423757 sshd[4199]: Accepted publickey for core from 139.178.89.65 port 54878 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:34:57.425670 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:57.432170 systemd-logind[1481]: New session 10 of user core. Mar 25 01:34:57.437510 systemd[1]: Started session-10.scope - Session 10 of User core. Mar 25 01:34:57.714453 sshd[4201]: Connection closed by 139.178.89.65 port 54878 Mar 25 01:34:57.715376 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:57.719917 systemd[1]: sshd@10-10.128.0.106:22-139.178.89.65:54878.service: Deactivated successfully. Mar 25 01:34:57.723279 systemd[1]: session-10.scope: Deactivated successfully. Mar 25 01:34:57.726657 systemd-logind[1481]: Session 10 logged out. Waiting for processes to exit. Mar 25 01:34:57.729169 systemd-logind[1481]: Removed session 10. Mar 25 01:34:57.769228 systemd[1]: Started sshd@11-10.128.0.106:22-139.178.89.65:54894.service - OpenSSH per-connection server daemon (139.178.89.65:54894). 
Mar 25 01:34:58.073262 sshd[4214]: Accepted publickey for core from 139.178.89.65 port 54894 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:34:58.075248 sshd-session[4214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:58.081260 systemd-logind[1481]: New session 11 of user core. Mar 25 01:34:58.087526 systemd[1]: Started session-11.scope - Session 11 of User core. Mar 25 01:34:58.401285 sshd[4216]: Connection closed by 139.178.89.65 port 54894 Mar 25 01:34:58.402580 sshd-session[4214]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:58.408327 systemd[1]: sshd@11-10.128.0.106:22-139.178.89.65:54894.service: Deactivated successfully. Mar 25 01:34:58.411316 systemd[1]: session-11.scope: Deactivated successfully. Mar 25 01:34:58.413630 systemd-logind[1481]: Session 11 logged out. Waiting for processes to exit. Mar 25 01:34:58.415264 systemd-logind[1481]: Removed session 11. Mar 25 01:34:58.455073 systemd[1]: Started sshd@12-10.128.0.106:22-139.178.89.65:47334.service - OpenSSH per-connection server daemon (139.178.89.65:47334). Mar 25 01:34:58.753659 sshd[4226]: Accepted publickey for core from 139.178.89.65 port 47334 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:34:58.756144 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:34:58.766853 systemd-logind[1481]: New session 12 of user core. Mar 25 01:34:58.772507 systemd[1]: Started session-12.scope - Session 12 of User core. Mar 25 01:34:59.043957 sshd[4228]: Connection closed by 139.178.89.65 port 47334 Mar 25 01:34:59.045051 sshd-session[4226]: pam_unix(sshd:session): session closed for user core Mar 25 01:34:59.049797 systemd[1]: sshd@12-10.128.0.106:22-139.178.89.65:47334.service: Deactivated successfully. Mar 25 01:34:59.052858 systemd[1]: session-12.scope: Deactivated successfully. Mar 25 01:34:59.055159 systemd-logind[1481]: Session 12 logged out. 
Waiting for processes to exit. Mar 25 01:34:59.056962 systemd-logind[1481]: Removed session 12. Mar 25 01:34:59.173832 containerd[1510]: time="2025-03-25T01:34:59.173751194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,}" Mar 25 01:34:59.243559 containerd[1510]: time="2025-03-25T01:34:59.243499030Z" level=error msg="Failed to destroy network for sandbox \"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:59.245540 containerd[1510]: time="2025-03-25T01:34:59.245396970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:59.246555 kubelet[2785]: E0325 01:34:59.246486 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:34:59.247015 kubelet[2785]: E0325 01:34:59.246588 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:59.247015 kubelet[2785]: E0325 01:34:59.246627 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:34:59.247015 kubelet[2785]: E0325 01:34:59.246696 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5e08e0f9626823a0606f208ebf0cd7445e0ab68038e3a7ec095f418ae4a5a35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" Mar 25 01:34:59.249646 systemd[1]: run-netns-cni\x2d3b9b60e4\x2d2434\x2d4d0e\x2d13a7\x2d351655614d44.mount: Deactivated successfully. 
Mar 25 01:35:00.174338 kubelet[2785]: I0325 01:35:00.173346 2785 scope.go:117] "RemoveContainer" containerID="12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206" Mar 25 01:35:00.174634 kubelet[2785]: E0325 01:35:00.174271 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-9qtgm_calico-system(7220257e-c04d-4718-a529-cbf87b01b9fd)\"" pod="calico-system/calico-node-9qtgm" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" Mar 25 01:35:02.174339 containerd[1510]: time="2025-03-25T01:35:02.173906420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:02.175905 containerd[1510]: time="2025-03-25T01:35:02.175475436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:02.295559 containerd[1510]: time="2025-03-25T01:35:02.295476306Z" level=error msg="Failed to destroy network for sandbox \"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.295559 containerd[1510]: time="2025-03-25T01:35:02.295475435Z" level=error msg="Failed to destroy network for sandbox \"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.297078 containerd[1510]: time="2025-03-25T01:35:02.296964092Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.299064 kubelet[2785]: E0325 01:35:02.297885 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.299064 kubelet[2785]: E0325 01:35:02.297998 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:35:02.299064 kubelet[2785]: E0325 01:35:02.298037 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:35:02.299785 kubelet[2785]: E0325 01:35:02.298108 
2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc907fa4509ea731ea76e2979f69e5a8d6acc084a933363b727b9d3027933cac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" Mar 25 01:35:02.300866 containerd[1510]: time="2025-03-25T01:35:02.300710764Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.303355 kubelet[2785]: E0325 01:35:02.301492 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:02.303355 kubelet[2785]: E0325 01:35:02.301564 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:35:02.303355 kubelet[2785]: E0325 01:35:02.301596 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:35:02.303600 kubelet[2785]: E0325 01:35:02.301656 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ed96154c633c0a29c5a09fb84deb1c6edb6e1f564dd5f19bb4b834b25cc6e746\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-glkrr" podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" Mar 25 01:35:02.304254 systemd[1]: run-netns-cni\x2d45b7903f\x2de194\x2de63c\x2d23c7\x2dbddacae192b6.mount: Deactivated successfully. Mar 25 01:35:02.304470 systemd[1]: run-netns-cni\x2d344c03d8\x2d4853\x2d5aae\x2d498f\x2d95ca9866e286.mount: Deactivated successfully. 
Mar 25 01:35:04.099182 systemd[1]: Started sshd@13-10.128.0.106:22-139.178.89.65:47336.service - OpenSSH per-connection server daemon (139.178.89.65:47336). Mar 25 01:35:04.174751 containerd[1510]: time="2025-03-25T01:35:04.174691523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:35:04.175993 containerd[1510]: time="2025-03-25T01:35:04.175496400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:04.326660 containerd[1510]: time="2025-03-25T01:35:04.323923715Z" level=error msg="Failed to destroy network for sandbox \"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.326894 containerd[1510]: time="2025-03-25T01:35:04.326741855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.329230 kubelet[2785]: E0325 01:35:04.328540 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.329230 kubelet[2785]: E0325 01:35:04.328637 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:35:04.329230 kubelet[2785]: E0325 01:35:04.328675 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:35:04.337174 kubelet[2785]: E0325 01:35:04.328750 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56271e2bd3ea02cb24b5058cfad0784da7949b63159b67e815f747322d2fdfb3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" Mar 25 01:35:04.337905 systemd[1]: 
run-netns-cni\x2dc557b6f0\x2d5fb9\x2d97f5\x2df142\x2d0f79aac97de4.mount: Deactivated successfully. Mar 25 01:35:04.350820 containerd[1510]: time="2025-03-25T01:35:04.350617509Z" level=error msg="Failed to destroy network for sandbox \"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.354734 containerd[1510]: time="2025-03-25T01:35:04.354630326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.355062 kubelet[2785]: E0325 01:35:04.354989 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:04.355148 kubelet[2785]: E0325 01:35:04.355093 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:35:04.355220 kubelet[2785]: E0325 01:35:04.355140 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:35:04.355273 kubelet[2785]: E0325 01:35:04.355212 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"39d885f24ee42f89aecbd5bc7f22f1b3fd4dabfda69c16e27003a18bf748623a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:35:04.356734 systemd[1]: run-netns-cni\x2d834482ac\x2d2e32\x2d2324\x2d89ed\x2d67fa4535f1b7.mount: Deactivated successfully. Mar 25 01:35:04.416459 sshd[4334]: Accepted publickey for core from 139.178.89.65 port 47336 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:04.418666 sshd-session[4334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:04.425535 systemd-logind[1481]: New session 13 of user core. Mar 25 01:35:04.431516 systemd[1]: Started session-13.scope - Session 13 of User core. 
Mar 25 01:35:04.716198 sshd[4399]: Connection closed by 139.178.89.65 port 47336 Mar 25 01:35:04.717387 sshd-session[4334]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:04.723374 systemd[1]: sshd@13-10.128.0.106:22-139.178.89.65:47336.service: Deactivated successfully. Mar 25 01:35:04.727836 systemd[1]: session-13.scope: Deactivated successfully. Mar 25 01:35:04.730167 systemd-logind[1481]: Session 13 logged out. Waiting for processes to exit. Mar 25 01:35:04.732082 systemd-logind[1481]: Removed session 13. Mar 25 01:35:06.173775 containerd[1510]: time="2025-03-25T01:35:06.173414359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:06.251774 containerd[1510]: time="2025-03-25T01:35:06.251700211Z" level=error msg="Failed to destroy network for sandbox \"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:06.253468 containerd[1510]: time="2025-03-25T01:35:06.253385631Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:06.255335 kubelet[2785]: E0325 01:35:06.254031 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:06.255335 kubelet[2785]: E0325 01:35:06.254125 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:35:06.255335 kubelet[2785]: E0325 01:35:06.254178 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:35:06.255967 kubelet[2785]: E0325 01:35:06.254250 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9d0c26fc0a165bcf91ad264837f126eef0d14a2625fa56f018b0877125263680\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pkm5c" 
podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" Mar 25 01:35:06.258198 systemd[1]: run-netns-cni\x2dc5c99f85\x2d75d0\x2d5fab\x2d51aa\x2d8b9687438b6b.mount: Deactivated successfully. Mar 25 01:35:09.772717 systemd[1]: Started sshd@14-10.128.0.106:22-139.178.89.65:48436.service - OpenSSH per-connection server daemon (139.178.89.65:48436). Mar 25 01:35:10.072321 sshd[4442]: Accepted publickey for core from 139.178.89.65 port 48436 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:10.074387 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:10.081264 systemd-logind[1481]: New session 14 of user core. Mar 25 01:35:10.091575 systemd[1]: Started session-14.scope - Session 14 of User core. Mar 25 01:35:10.175227 containerd[1510]: time="2025-03-25T01:35:10.174479378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,}" Mar 25 01:35:10.263059 containerd[1510]: time="2025-03-25T01:35:10.262947156Z" level=error msg="Failed to destroy network for sandbox \"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:10.265396 containerd[1510]: time="2025-03-25T01:35:10.265000800Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b59f6c75b-4wn56,Uid:ebc6596e-ec52-4fcc-a122-1403cd0be893,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 25 01:35:10.266667 kubelet[2785]: E0325 01:35:10.266595 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:10.267172 kubelet[2785]: E0325 01:35:10.266701 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:35:10.267172 kubelet[2785]: E0325 01:35:10.266734 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" Mar 25 01:35:10.267172 kubelet[2785]: E0325 01:35:10.266805 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7b59f6c75b-4wn56_calico-system(ebc6596e-ec52-4fcc-a122-1403cd0be893)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"b91cf4d7a03fdfa542e5ad8b391725f1e44f5a44db387efc5a8b12d67b4f409a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b59f6c75b-4wn56" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" Mar 25 01:35:10.272208 systemd[1]: run-netns-cni\x2d2df113b3\x2d8c01\x2d621c\x2dd552\x2d552e53a3e445.mount: Deactivated successfully. Mar 25 01:35:10.403078 sshd[4444]: Connection closed by 139.178.89.65 port 48436 Mar 25 01:35:10.404696 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:10.411540 systemd[1]: sshd@14-10.128.0.106:22-139.178.89.65:48436.service: Deactivated successfully. Mar 25 01:35:10.415138 systemd[1]: session-14.scope: Deactivated successfully. Mar 25 01:35:10.416411 systemd-logind[1481]: Session 14 logged out. Waiting for processes to exit. Mar 25 01:35:10.418059 systemd-logind[1481]: Removed session 14. 
Mar 25 01:35:13.173587 containerd[1510]: time="2025-03-25T01:35:13.173494492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:13.246410 containerd[1510]: time="2025-03-25T01:35:13.246340995Z" level=error msg="Failed to destroy network for sandbox \"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:13.251338 containerd[1510]: time="2025-03-25T01:35:13.249099589Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:13.251589 kubelet[2785]: E0325 01:35:13.249547 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:13.251589 kubelet[2785]: E0325 01:35:13.249663 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:35:13.251589 kubelet[2785]: E0325 01:35:13.249700 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-glkrr" Mar 25 01:35:13.252184 kubelet[2785]: E0325 01:35:13.249779 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-glkrr_kube-system(7890deb5-f2f5-4b95-bbc2-b10706e5343d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f29fe45d62446dbef91916aaaf0dff00f120a1af91c88ac73f708a6cbbf29186\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-glkrr" podUID="7890deb5-f2f5-4b95-bbc2-b10706e5343d" Mar 25 01:35:13.252873 systemd[1]: run-netns-cni\x2d42ec9cd7\x2d3cd8\x2dfe18\x2de8c5\x2d0139d796406e.mount: Deactivated successfully. 
Mar 25 01:35:14.173830 kubelet[2785]: I0325 01:35:14.173762 2785 scope.go:117] "RemoveContainer" containerID="12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206" Mar 25 01:35:14.184404 containerd[1510]: time="2025-03-25T01:35:14.184303624Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for container &ContainerMetadata{Name:calico-node,Attempt:3,}" Mar 25 01:35:14.209803 containerd[1510]: time="2025-03-25T01:35:14.208133907Z" level=info msg="Container 53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:14.236350 containerd[1510]: time="2025-03-25T01:35:14.236230300Z" level=info msg="CreateContainer within sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" for &ContainerMetadata{Name:calico-node,Attempt:3,} returns container id \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\"" Mar 25 01:35:14.238547 containerd[1510]: time="2025-03-25T01:35:14.238498853Z" level=info msg="StartContainer for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\"" Mar 25 01:35:14.241543 containerd[1510]: time="2025-03-25T01:35:14.241481361Z" level=info msg="connecting to shim 53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" address="unix:///run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9" protocol=ttrpc version=3 Mar 25 01:35:14.312162 systemd[1]: Started cri-containerd-53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982.scope - libcontainer container 53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982. 
Mar 25 01:35:14.420765 containerd[1510]: time="2025-03-25T01:35:14.420681302Z" level=info msg="StartContainer for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" returns successfully" Mar 25 01:35:14.556445 systemd[1]: cri-containerd-53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982.scope: Deactivated successfully. Mar 25 01:35:14.570804 containerd[1510]: time="2025-03-25T01:35:14.570745903Z" level=info msg="received exit event container_id:\"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" id:\"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" pid:4531 exit_status:1 exited_at:{seconds:1742866514 nanos:568699598}" Mar 25 01:35:14.572367 containerd[1510]: time="2025-03-25T01:35:14.572323353Z" level=info msg="TaskExit event in podsandbox handler container_id:\"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" id:\"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" pid:4531 exit_status:1 exited_at:{seconds:1742866514 nanos:568699598}" Mar 25 01:35:14.623403 kubelet[2785]: I0325 01:35:14.623178 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-9qtgm" podStartSLOduration=45.286963337 podStartE2EDuration="1m1.623119134s" podCreationTimestamp="2025-03-25 01:34:13 +0000 UTC" firstStartedPulling="2025-03-25 01:34:14.285740324 +0000 UTC m=+24.271615432" lastFinishedPulling="2025-03-25 01:34:30.621896097 +0000 UTC m=+40.607771229" observedRunningTime="2025-03-25 01:34:31.424841094 +0000 UTC m=+41.410716225" watchObservedRunningTime="2025-03-25 01:35:14.623119134 +0000 UTC m=+84.608994289" Mar 25 01:35:14.644574 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982-rootfs.mount: Deactivated successfully. 
Mar 25 01:35:14.655264 containerd[1510]: time="2025-03-25T01:35:14.655047942Z" level=error msg="failed sending message on channel" error="write unix /run/containerd/s/a25cb26a8308afcefc8161d9fb05eed116b6c1f319644146ab6aac804912c4e9->@: write: broken pipe" runtime=io.containerd.runc.v2 Mar 25 01:35:14.656590 containerd[1510]: time="2025-03-25T01:35:14.655530239Z" level=error msg="ExecSync for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" failed" error="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5b1b6b25c45c988ee50045a92a2c42e5693a81ccdf4c22d9741a5ff811adf039\": ttrpc: closed" Mar 25 01:35:14.657020 kubelet[2785]: E0325 01:35:14.656226 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"5b1b6b25c45c988ee50045a92a2c42e5693a81ccdf4c22d9741a5ff811adf039\": ttrpc: closed" containerID="53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:35:14.658214 containerd[1510]: time="2025-03-25T01:35:14.658161966Z" level=error msg="ExecSync for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" failed" error="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" Mar 25 01:35:14.658765 kubelet[2785]: E0325 01:35:14.658718 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:35:14.659146 containerd[1510]: time="2025-03-25T01:35:14.659101066Z" level=error msg="ExecSync for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" failed" error="rpc error: code = Unknown desc = failed to exec in 
container: container is in CONTAINER_EXITED state" Mar 25 01:35:14.659429 kubelet[2785]: E0325 01:35:14.659387 2785 remote_runtime.go:496] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: container is in CONTAINER_EXITED state" containerID="53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Mar 25 01:35:15.003244 containerd[1510]: time="2025-03-25T01:35:15.002563476Z" level=info msg="StopContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" with timeout 300 (s)" Mar 25 01:35:15.003244 containerd[1510]: time="2025-03-25T01:35:15.003208958Z" level=info msg="Stop container \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" with signal terminated" Mar 25 01:35:15.055913 systemd[1]: cri-containerd-443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb.scope: Deactivated successfully. Mar 25 01:35:15.060746 containerd[1510]: time="2025-03-25T01:35:15.060555692Z" level=info msg="received exit event container_id:\"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" id:\"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" pid:3379 exit_status:1 exited_at:{seconds:1742866515 nanos:58658859}" Mar 25 01:35:15.066848 containerd[1510]: time="2025-03-25T01:35:15.066319561Z" level=info msg="TaskExit event in podsandbox handler container_id:\"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" id:\"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" pid:3379 exit_status:1 exited_at:{seconds:1742866515 nanos:58658859}" Mar 25 01:35:15.110086 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb-rootfs.mount: Deactivated successfully. 
Mar 25 01:35:15.119378 containerd[1510]: time="2025-03-25T01:35:15.119323265Z" level=info msg="StopContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" returns successfully" Mar 25 01:35:15.120958 containerd[1510]: time="2025-03-25T01:35:15.120561461Z" level=info msg="StopPodSandbox for \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\"" Mar 25 01:35:15.120958 containerd[1510]: time="2025-03-25T01:35:15.120660550Z" level=info msg="Container to stop \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:35:15.132825 systemd[1]: cri-containerd-30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92.scope: Deactivated successfully. Mar 25 01:35:15.138043 containerd[1510]: time="2025-03-25T01:35:15.137898420Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" id:\"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" pid:3214 exit_status:137 exited_at:{seconds:1742866515 nanos:136965420}" Mar 25 01:35:15.178785 containerd[1510]: time="2025-03-25T01:35:15.177987301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:15.192610 containerd[1510]: time="2025-03-25T01:35:15.192564575Z" level=info msg="shim disconnected" id=30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92 namespace=k8s.io Mar 25 01:35:15.194014 containerd[1510]: time="2025-03-25T01:35:15.193656484Z" level=warning msg="cleaning up after shim disconnected" id=30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92 namespace=k8s.io Mar 25 01:35:15.194014 containerd[1510]: time="2025-03-25T01:35:15.193688614Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:35:15.221413 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92-rootfs.mount: Deactivated successfully. Mar 25 01:35:15.247103 containerd[1510]: time="2025-03-25T01:35:15.245096399Z" level=info msg="received exit event sandbox_id:\"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" exit_status:137 exited_at:{seconds:1742866515 nanos:136965420}" Mar 25 01:35:15.247103 containerd[1510]: time="2025-03-25T01:35:15.246989229Z" level=info msg="TearDown network for sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" successfully" Mar 25 01:35:15.251346 containerd[1510]: time="2025-03-25T01:35:15.249166762Z" level=info msg="StopPodSandbox for \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" returns successfully" Mar 25 01:35:15.254020 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92-shm.mount: Deactivated successfully. 
Mar 25 01:35:15.352145 containerd[1510]: time="2025-03-25T01:35:15.352066122Z" level=error msg="Failed to destroy network for sandbox \"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:15.357199 containerd[1510]: time="2025-03-25T01:35:15.357120044Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:15.357731 kubelet[2785]: E0325 01:35:15.357591 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:15.357731 kubelet[2785]: E0325 01:35:15.357676 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:35:15.357731 kubelet[2785]: E0325 01:35:15.357712 
2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" Mar 25 01:35:15.357982 kubelet[2785]: E0325 01:35:15.357783 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-xcvmz_calico-apiserver(58b0bc19-665e-4afb-b4c7-59149aa71196)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c3f97c5d279b54913a4b8fd11ab5f93092192f30522959b4389d4987d9f167a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podUID="58b0bc19-665e-4afb-b4c7-59149aa71196" Mar 25 01:35:15.358334 systemd[1]: run-netns-cni\x2d152cb1e5\x2d7ee0\x2d1c7f\x2dca57\x2d9ce9fc6d4e22.mount: Deactivated successfully. 
Mar 25 01:35:15.377533 kubelet[2785]: I0325 01:35:15.376502 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66b18176-004d-49de-83b6-365527954d1a-typha-certs\") pod \"66b18176-004d-49de-83b6-365527954d1a\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " Mar 25 01:35:15.377533 kubelet[2785]: I0325 01:35:15.376568 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b18176-004d-49de-83b6-365527954d1a-tigera-ca-bundle\") pod \"66b18176-004d-49de-83b6-365527954d1a\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " Mar 25 01:35:15.377533 kubelet[2785]: I0325 01:35:15.376604 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-njqjg\" (UniqueName: \"kubernetes.io/projected/66b18176-004d-49de-83b6-365527954d1a-kube-api-access-njqjg\") pod \"66b18176-004d-49de-83b6-365527954d1a\" (UID: \"66b18176-004d-49de-83b6-365527954d1a\") " Mar 25 01:35:15.387998 systemd[1]: var-lib-kubelet-pods-66b18176\x2d004d\x2d49de\x2d83b6\x2d365527954d1a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnjqjg.mount: Deactivated successfully. Mar 25 01:35:15.388557 kubelet[2785]: I0325 01:35:15.388512 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66b18176-004d-49de-83b6-365527954d1a-kube-api-access-njqjg" (OuterVolumeSpecName: "kube-api-access-njqjg") pod "66b18176-004d-49de-83b6-365527954d1a" (UID: "66b18176-004d-49de-83b6-365527954d1a"). InnerVolumeSpecName "kube-api-access-njqjg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:35:15.399325 kubelet[2785]: I0325 01:35:15.397954 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/66b18176-004d-49de-83b6-365527954d1a-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "66b18176-004d-49de-83b6-365527954d1a" (UID: "66b18176-004d-49de-83b6-365527954d1a"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:35:15.400011 systemd[1]: var-lib-kubelet-pods-66b18176\x2d004d\x2d49de\x2d83b6\x2d365527954d1a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Mar 25 01:35:15.401905 kubelet[2785]: I0325 01:35:15.400042 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66b18176-004d-49de-83b6-365527954d1a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "66b18176-004d-49de-83b6-365527954d1a" (UID: "66b18176-004d-49de-83b6-365527954d1a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:35:15.400195 systemd[1]: var-lib-kubelet-pods-66b18176\x2d004d\x2d49de\x2d83b6\x2d365527954d1a-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Mar 25 01:35:15.471006 systemd[1]: Started sshd@15-10.128.0.106:22-139.178.89.65:48450.service - OpenSSH per-connection server daemon (139.178.89.65:48450). 
Mar 25 01:35:15.479071 kubelet[2785]: I0325 01:35:15.478445 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc6596e-ec52-4fcc-a122-1403cd0be893-tigera-ca-bundle\") pod \"ebc6596e-ec52-4fcc-a122-1403cd0be893\" (UID: \"ebc6596e-ec52-4fcc-a122-1403cd0be893\") " Mar 25 01:35:15.479071 kubelet[2785]: I0325 01:35:15.478507 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7l7k\" (UniqueName: \"kubernetes.io/projected/ebc6596e-ec52-4fcc-a122-1403cd0be893-kube-api-access-v7l7k\") pod \"ebc6596e-ec52-4fcc-a122-1403cd0be893\" (UID: \"ebc6596e-ec52-4fcc-a122-1403cd0be893\") " Mar 25 01:35:15.479071 kubelet[2785]: I0325 01:35:15.478588 2785 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/66b18176-004d-49de-83b6-365527954d1a-typha-certs\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:15.479071 kubelet[2785]: I0325 01:35:15.478606 2785 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/66b18176-004d-49de-83b6-365527954d1a-tigera-ca-bundle\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:15.479071 kubelet[2785]: I0325 01:35:15.478624 2785 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-njqjg\" (UniqueName: \"kubernetes.io/projected/66b18176-004d-49de-83b6-365527954d1a-kube-api-access-njqjg\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:15.484729 kubelet[2785]: I0325 01:35:15.484685 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ebc6596e-ec52-4fcc-a122-1403cd0be893-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ebc6596e-ec52-4fcc-a122-1403cd0be893" (UID: 
"ebc6596e-ec52-4fcc-a122-1403cd0be893"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:35:15.488570 kubelet[2785]: I0325 01:35:15.488405 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ebc6596e-ec52-4fcc-a122-1403cd0be893-kube-api-access-v7l7k" (OuterVolumeSpecName: "kube-api-access-v7l7k") pod "ebc6596e-ec52-4fcc-a122-1403cd0be893" (UID: "ebc6596e-ec52-4fcc-a122-1403cd0be893"). InnerVolumeSpecName "kube-api-access-v7l7k". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:35:15.580381 kubelet[2785]: I0325 01:35:15.579148 2785 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ebc6596e-ec52-4fcc-a122-1403cd0be893-tigera-ca-bundle\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:15.580381 kubelet[2785]: I0325 01:35:15.579196 2785 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v7l7k\" (UniqueName: \"kubernetes.io/projected/ebc6596e-ec52-4fcc-a122-1403cd0be893-kube-api-access-v7l7k\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:15.603601 kubelet[2785]: I0325 01:35:15.603530 2785 scope.go:117] "RemoveContainer" containerID="443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb" Mar 25 01:35:15.607754 containerd[1510]: time="2025-03-25T01:35:15.607641880Z" level=info msg="RemoveContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\"" Mar 25 01:35:15.614926 systemd[1]: Removed slice kubepods-besteffort-pod66b18176_004d_49de_83b6_365527954d1a.slice - libcontainer container kubepods-besteffort-pod66b18176_004d_49de_83b6_365527954d1a.slice. 
Mar 25 01:35:15.618630 containerd[1510]: time="2025-03-25T01:35:15.618547248Z" level=info msg="RemoveContainer for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" returns successfully" Mar 25 01:35:15.619014 kubelet[2785]: I0325 01:35:15.618958 2785 scope.go:117] "RemoveContainer" containerID="443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb" Mar 25 01:35:15.619575 containerd[1510]: time="2025-03-25T01:35:15.619283332Z" level=error msg="ContainerStatus for \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\": not found" Mar 25 01:35:15.620028 kubelet[2785]: E0325 01:35:15.619991 2785 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\": not found" containerID="443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb" Mar 25 01:35:15.620229 kubelet[2785]: I0325 01:35:15.620038 2785 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb"} err="failed to get container status \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\": rpc error: code = NotFound desc = an error occurred when try to find container \"443ffac60b7b97741f59fa2af8ed431f707c9a797dde3e52b66c7e3e08e17bbb\": not found" Mar 25 01:35:15.623554 kubelet[2785]: I0325 01:35:15.622112 2785 scope.go:117] "RemoveContainer" containerID="12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206" Mar 25 01:35:15.624344 kubelet[2785]: I0325 01:35:15.624320 2785 scope.go:117] "RemoveContainer" containerID="53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" Mar 25 01:35:15.625583 
kubelet[2785]: E0325 01:35:15.625518 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 40s restarting failed container=calico-node pod=calico-node-9qtgm_calico-system(7220257e-c04d-4718-a529-cbf87b01b9fd)\"" pod="calico-system/calico-node-9qtgm" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" Mar 25 01:35:15.632594 containerd[1510]: time="2025-03-25T01:35:15.632272255Z" level=info msg="RemoveContainer for \"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\"" Mar 25 01:35:15.633211 systemd[1]: Removed slice kubepods-besteffort-podebc6596e_ec52_4fcc_a122_1403cd0be893.slice - libcontainer container kubepods-besteffort-podebc6596e_ec52_4fcc_a122_1403cd0be893.slice. Mar 25 01:35:15.654394 containerd[1510]: time="2025-03-25T01:35:15.654082950Z" level=info msg="RemoveContainer for \"12d47b531e52213ce1e44b39178b6891e8d576e6c81244bfed2ebeb81123e206\" returns successfully" Mar 25 01:35:15.795124 sshd[4660]: Accepted publickey for core from 139.178.89.65 port 48450 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:15.797126 sshd-session[4660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:15.804499 systemd-logind[1481]: New session 15 of user core. Mar 25 01:35:15.809533 systemd[1]: Started session-15.scope - Session 15 of User core. Mar 25 01:35:16.090790 sshd[4663]: Connection closed by 139.178.89.65 port 48450 Mar 25 01:35:16.091733 sshd-session[4660]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:16.096807 systemd[1]: sshd@15-10.128.0.106:22-139.178.89.65:48450.service: Deactivated successfully. Mar 25 01:35:16.100035 systemd[1]: session-15.scope: Deactivated successfully. Mar 25 01:35:16.102262 systemd-logind[1481]: Session 15 logged out. Waiting for processes to exit. Mar 25 01:35:16.104089 systemd-logind[1481]: Removed session 15. 
Mar 25 01:35:16.173938 containerd[1510]: time="2025-03-25T01:35:16.173867276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:16.178315 kubelet[2785]: I0325 01:35:16.178254 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66b18176-004d-49de-83b6-365527954d1a" path="/var/lib/kubelet/pods/66b18176-004d-49de-83b6-365527954d1a/volumes" Mar 25 01:35:16.179139 kubelet[2785]: I0325 01:35:16.179002 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ebc6596e-ec52-4fcc-a122-1403cd0be893" path="/var/lib/kubelet/pods/ebc6596e-ec52-4fcc-a122-1403cd0be893/volumes" Mar 25 01:35:16.215816 systemd[1]: var-lib-kubelet-pods-ebc6596e\x2dec52\x2d4fcc\x2da122\x2d1403cd0be893-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7l7k.mount: Deactivated successfully. Mar 25 01:35:16.253493 containerd[1510]: time="2025-03-25T01:35:16.253429884Z" level=error msg="Failed to destroy network for sandbox \"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:16.258659 containerd[1510]: time="2025-03-25T01:35:16.257047237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:16.257936 systemd[1]: 
run-netns-cni\x2d7ff37a18\x2d8349\x2d23f8\x2d6eb7\x2de04dea704db5.mount: Deactivated successfully. Mar 25 01:35:16.258991 kubelet[2785]: E0325 01:35:16.257593 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:16.258991 kubelet[2785]: E0325 01:35:16.257671 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:35:16.258991 kubelet[2785]: E0325 01:35:16.257703 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" Mar 25 01:35:16.259175 kubelet[2785]: E0325 01:35:16.257764 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-96c65cfff-4c2dm_calico-apiserver(7e7c4ede-6986-4ec3-afbd-544ef8b098a2)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"14d6a82ba072aceb121871b12b59a72ddf364cb4fecf84cbd04df96733483b57\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podUID="7e7c4ede-6986-4ec3-afbd-544ef8b098a2" Mar 25 01:35:16.632318 containerd[1510]: time="2025-03-25T01:35:16.631118127Z" level=info msg="StopPodSandbox for \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\"" Mar 25 01:35:16.632318 containerd[1510]: time="2025-03-25T01:35:16.631211394Z" level=info msg="Container to stop \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:35:16.632318 containerd[1510]: time="2025-03-25T01:35:16.631236977Z" level=info msg="Container to stop \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:35:16.632318 containerd[1510]: time="2025-03-25T01:35:16.631253509Z" level=info msg="Container to stop \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Mar 25 01:35:16.648427 systemd[1]: cri-containerd-87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896.scope: Deactivated successfully. 
Mar 25 01:35:16.653642 containerd[1510]: time="2025-03-25T01:35:16.653593471Z" level=info msg="TaskExit event in podsandbox handler container_id:\"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" id:\"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" pid:3282 exit_status:137 exited_at:{seconds:1742866516 nanos:649822305}" Mar 25 01:35:16.699669 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896-rootfs.mount: Deactivated successfully. Mar 25 01:35:16.708215 containerd[1510]: time="2025-03-25T01:35:16.708154869Z" level=info msg="shim disconnected" id=87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896 namespace=k8s.io Mar 25 01:35:16.708493 containerd[1510]: time="2025-03-25T01:35:16.708321553Z" level=warning msg="cleaning up after shim disconnected" id=87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896 namespace=k8s.io Mar 25 01:35:16.708493 containerd[1510]: time="2025-03-25T01:35:16.708342671Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 25 01:35:16.729336 containerd[1510]: time="2025-03-25T01:35:16.724475647Z" level=info msg="received exit event sandbox_id:\"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" exit_status:137 exited_at:{seconds:1742866516 nanos:649822305}" Mar 25 01:35:16.729336 containerd[1510]: time="2025-03-25T01:35:16.724728549Z" level=info msg="TearDown network for sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" successfully" Mar 25 01:35:16.729336 containerd[1510]: time="2025-03-25T01:35:16.724756960Z" level=info msg="StopPodSandbox for \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" returns successfully" Mar 25 01:35:16.733352 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896-shm.mount: Deactivated successfully. 
Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788410 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjqdt\" (UniqueName: \"kubernetes.io/projected/7220257e-c04d-4718-a529-cbf87b01b9fd-kube-api-access-mjqdt\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788467 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-bin-dir\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788500 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-xtables-lock\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788544 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-lib-modules\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788571 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-run-calico\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.789822 kubelet[2785]: I0325 01:35:16.788600 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-log-dir\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788630 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-flexvol-driver-host\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788662 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7220257e-c04d-4718-a529-cbf87b01b9fd-tigera-ca-bundle\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788691 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-lib-calico\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788715 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-net-dir\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788748 2785 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7220257e-c04d-4718-a529-cbf87b01b9fd-node-certs\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.790733 kubelet[2785]: I0325 01:35:16.788795 2785 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-policysync\") pod \"7220257e-c04d-4718-a529-cbf87b01b9fd\" (UID: \"7220257e-c04d-4718-a529-cbf87b01b9fd\") " Mar 25 01:35:16.792260 kubelet[2785]: I0325 01:35:16.788916 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-policysync" (OuterVolumeSpecName: "policysync") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.792260 kubelet[2785]: I0325 01:35:16.790971 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.792260 kubelet[2785]: I0325 01:35:16.791062 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.792260 kubelet[2785]: I0325 01:35:16.791131 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.792260 kubelet[2785]: I0325 01:35:16.791160 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.793416 kubelet[2785]: I0325 01:35:16.791326 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.793416 kubelet[2785]: I0325 01:35:16.791475 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.793416 kubelet[2785]: I0325 01:35:16.791636 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "flexvol-driver-host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.793416 kubelet[2785]: I0325 01:35:16.792371 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 25 01:35:16.807630 kubelet[2785]: I0325 01:35:16.806488 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7220257e-c04d-4718-a529-cbf87b01b9fd-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 25 01:35:16.808909 systemd[1]: var-lib-kubelet-pods-7220257e\x2dc04d\x2d4718\x2da529\x2dcbf87b01b9fd-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjqdt.mount: Deactivated successfully. Mar 25 01:35:16.809096 systemd[1]: var-lib-kubelet-pods-7220257e\x2dc04d\x2d4718\x2da529\x2dcbf87b01b9fd-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Mar 25 01:35:16.814380 kubelet[2785]: I0325 01:35:16.812392 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7220257e-c04d-4718-a529-cbf87b01b9fd-kube-api-access-mjqdt" (OuterVolumeSpecName: "kube-api-access-mjqdt") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "kube-api-access-mjqdt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 25 01:35:16.814380 kubelet[2785]: I0325 01:35:16.813268 2785 topology_manager.go:215] "Topology Admit Handler" podUID="e2760a51-e85a-4cb4-a8d9-c049399a8ee4" podNamespace="calico-system" podName="calico-node-pbjjf" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813420 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="flexvol-driver" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813436 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813450 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="66b18176-004d-49de-83b6-365527954d1a" containerName="calico-typha" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813461 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="install-cni" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813473 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814380 kubelet[2785]: E0325 01:35:16.813484 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814380 kubelet[2785]: I0325 01:35:16.813525 2785 memory_manager.go:354] "RemoveStaleState removing state" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814380 kubelet[2785]: I0325 01:35:16.813542 2785 memory_manager.go:354] "RemoveStaleState removing state" podUID="66b18176-004d-49de-83b6-365527954d1a" containerName="calico-typha" Mar 25 01:35:16.814913 kubelet[2785]: I0325 01:35:16.813551 2785 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814913 kubelet[2785]: I0325 01:35:16.813579 2785 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7220257e-c04d-4718-a529-cbf87b01b9fd-node-certs" (OuterVolumeSpecName: "node-certs") pod "7220257e-c04d-4718-a529-cbf87b01b9fd" (UID: "7220257e-c04d-4718-a529-cbf87b01b9fd"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 25 01:35:16.814913 kubelet[2785]: E0325 01:35:16.813597 2785 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814913 kubelet[2785]: I0325 01:35:16.813637 2785 memory_manager.go:354] "RemoveStaleState removing state" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.814913 kubelet[2785]: I0325 01:35:16.813648 2785 memory_manager.go:354] "RemoveStaleState removing state" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" containerName="calico-node" Mar 25 01:35:16.828476 systemd[1]: Created slice kubepods-besteffort-pode2760a51_e85a_4cb4_a8d9_c049399a8ee4.slice - libcontainer container kubepods-besteffort-pode2760a51_e85a_4cb4_a8d9_c049399a8ee4.slice. 
Mar 25 01:35:16.890081 kubelet[2785]: I0325 01:35:16.889920 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-tigera-ca-bundle\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890081 kubelet[2785]: I0325 01:35:16.889979 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-node-certs\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890081 kubelet[2785]: I0325 01:35:16.890010 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-flexvol-driver-host\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890081 kubelet[2785]: I0325 01:35:16.890038 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-cni-bin-dir\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890081 kubelet[2785]: I0325 01:35:16.890071 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-cni-log-dir\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890493 kubelet[2785]: I0325 01:35:16.890108 2785 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-xtables-lock\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890493 kubelet[2785]: I0325 01:35:16.890135 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-var-run-calico\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890493 kubelet[2785]: I0325 01:35:16.890162 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-var-lib-calico\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890493 kubelet[2785]: I0325 01:35:16.890189 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhsl4\" (UniqueName: \"kubernetes.io/projected/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-kube-api-access-bhsl4\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890493 kubelet[2785]: I0325 01:35:16.890213 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-policysync\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890243 2785 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-lib-modules\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890269 2785 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e2760a51-e85a-4cb4-a8d9-c049399a8ee4-cni-net-dir\") pod \"calico-node-pbjjf\" (UID: \"e2760a51-e85a-4cb4-a8d9-c049399a8ee4\") " pod="calico-system/calico-node-pbjjf" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890322 2785 reconciler_common.go:289] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-log-dir\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890341 2785 reconciler_common.go:289] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-flexvol-driver-host\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890360 2785 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7220257e-c04d-4718-a529-cbf87b01b9fd-tigera-ca-bundle\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.890793 kubelet[2785]: I0325 01:35:16.890379 2785 reconciler_common.go:289] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-lib-calico\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: 
I0325 01:35:16.890395 2785 reconciler_common.go:289] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-net-dir\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890412 2785 reconciler_common.go:289] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-policysync\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890429 2785 reconciler_common.go:289] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7220257e-c04d-4718-a529-cbf87b01b9fd-node-certs\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890445 2785 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-mjqdt\" (UniqueName: \"kubernetes.io/projected/7220257e-c04d-4718-a529-cbf87b01b9fd-kube-api-access-mjqdt\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890463 2785 reconciler_common.go:289] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-cni-bin-dir\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890482 2785 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-xtables-lock\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.891568 kubelet[2785]: I0325 01:35:16.890498 2785 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-lib-modules\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:16.892238 kubelet[2785]: I0325 01:35:16.890515 2785 reconciler_common.go:289] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7220257e-c04d-4718-a529-cbf87b01b9fd-var-run-calico\") on node \"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal\" DevicePath \"\"" Mar 25 01:35:17.134038 containerd[1510]: time="2025-03-25T01:35:17.133965497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbjjf,Uid:e2760a51-e85a-4cb4-a8d9-c049399a8ee4,Namespace:calico-system,Attempt:0,}" Mar 25 01:35:17.154726 containerd[1510]: time="2025-03-25T01:35:17.154372562Z" level=info msg="connecting to shim a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e" address="unix:///run/containerd/s/5cf997327a5cfdf80407137562981ad15437bf995ad5cfe06dcc9848b73c58d7" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:17.189547 systemd[1]: Started cri-containerd-a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e.scope - libcontainer container a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e. Mar 25 01:35:17.227782 systemd[1]: var-lib-kubelet-pods-7220257e\x2dc04d\x2d4718\x2da529\x2dcbf87b01b9fd-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
Mar 25 01:35:17.245462 containerd[1510]: time="2025-03-25T01:35:17.245411263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pbjjf,Uid:e2760a51-e85a-4cb4-a8d9-c049399a8ee4,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\"" Mar 25 01:35:17.249927 containerd[1510]: time="2025-03-25T01:35:17.249876458Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 25 01:35:17.264592 containerd[1510]: time="2025-03-25T01:35:17.261445107Z" level=info msg="Container 2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:17.273084 containerd[1510]: time="2025-03-25T01:35:17.273042527Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\"" Mar 25 01:35:17.273777 containerd[1510]: time="2025-03-25T01:35:17.273735467Z" level=info msg="StartContainer for \"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\"" Mar 25 01:35:17.276363 containerd[1510]: time="2025-03-25T01:35:17.276055571Z" level=info msg="connecting to shim 2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869" address="unix:///run/containerd/s/5cf997327a5cfdf80407137562981ad15437bf995ad5cfe06dcc9848b73c58d7" protocol=ttrpc version=3 Mar 25 01:35:17.319762 systemd[1]: Started cri-containerd-2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869.scope - libcontainer container 2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869. 
Mar 25 01:35:17.412918 containerd[1510]: time="2025-03-25T01:35:17.412140309Z" level=info msg="StartContainer for \"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\" returns successfully" Mar 25 01:35:17.425739 systemd[1]: cri-containerd-2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869.scope: Deactivated successfully. Mar 25 01:35:17.426138 systemd[1]: cri-containerd-2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869.scope: Consumed 55ms CPU time, 7.9M memory peak, 6.3M written to disk. Mar 25 01:35:17.432677 containerd[1510]: time="2025-03-25T01:35:17.432489083Z" level=info msg="received exit event container_id:\"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\" id:\"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\" pid:4811 exited_at:{seconds:1742866517 nanos:431651532}" Mar 25 01:35:17.433629 containerd[1510]: time="2025-03-25T01:35:17.433558530Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\" id:\"2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869\" pid:4811 exited_at:{seconds:1742866517 nanos:431651532}" Mar 25 01:35:17.465846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2aa67074a369bf6c86c628ef76f05accd3766963a0c24f468a2a185b264f8869-rootfs.mount: Deactivated successfully. 
Mar 25 01:35:17.643839 containerd[1510]: time="2025-03-25T01:35:17.642462312Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 25 01:35:17.651076 kubelet[2785]: I0325 01:35:17.648996 2785 scope.go:117] "RemoveContainer" containerID="53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982" Mar 25 01:35:17.662351 systemd[1]: Removed slice kubepods-besteffort-pod7220257e_c04d_4718_a529_cbf87b01b9fd.slice - libcontainer container kubepods-besteffort-pod7220257e_c04d_4718_a529_cbf87b01b9fd.slice. Mar 25 01:35:17.663914 containerd[1510]: time="2025-03-25T01:35:17.662100041Z" level=info msg="RemoveContainer for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\"" Mar 25 01:35:17.662533 systemd[1]: kubepods-besteffort-pod7220257e_c04d_4718_a529_cbf87b01b9fd.slice: Consumed 1.221s CPU time, 188.2M memory peak, 160.4M written to disk. Mar 25 01:35:17.672934 containerd[1510]: time="2025-03-25T01:35:17.671607878Z" level=info msg="Container 4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:17.684640 containerd[1510]: time="2025-03-25T01:35:17.684597015Z" level=info msg="RemoveContainer for \"53658a5fa669c1aeccda7fbfc9b10d2de14a51a066aee491e4e6b3e4918b4982\" returns successfully" Mar 25 01:35:17.687765 kubelet[2785]: I0325 01:35:17.687582 2785 scope.go:117] "RemoveContainer" containerID="4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb" Mar 25 01:35:17.706972 containerd[1510]: time="2025-03-25T01:35:17.706928495Z" level=info msg="RemoveContainer for \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\"" Mar 25 01:35:17.710119 containerd[1510]: time="2025-03-25T01:35:17.709916904Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\"" Mar 25 01:35:17.710545 containerd[1510]: time="2025-03-25T01:35:17.710512657Z" level=info msg="StartContainer for \"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\"" Mar 25 01:35:17.712790 containerd[1510]: time="2025-03-25T01:35:17.712625068Z" level=info msg="connecting to shim 4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae" address="unix:///run/containerd/s/5cf997327a5cfdf80407137562981ad15437bf995ad5cfe06dcc9848b73c58d7" protocol=ttrpc version=3 Mar 25 01:35:17.715150 containerd[1510]: time="2025-03-25T01:35:17.715118921Z" level=info msg="RemoveContainer for \"4ab45767d5c30d1b13aa47c7fa6905e84b7e947e679e962d6bc2f1ea621124fb\" returns successfully" Mar 25 01:35:17.715416 kubelet[2785]: I0325 01:35:17.715386 2785 scope.go:117] "RemoveContainer" containerID="4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849" Mar 25 01:35:17.719989 containerd[1510]: time="2025-03-25T01:35:17.719882426Z" level=info msg="RemoveContainer for \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\"" Mar 25 01:35:17.725436 containerd[1510]: time="2025-03-25T01:35:17.725333654Z" level=info msg="RemoveContainer for \"4f97effacb13b08c447aa52fad0bd5a2e555d5ced74181f28c5a1d46c1f9f849\" returns successfully" Mar 25 01:35:17.747649 systemd[1]: Started cri-containerd-4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae.scope - libcontainer container 4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae. 
Mar 25 01:35:17.823035 containerd[1510]: time="2025-03-25T01:35:17.822985002Z" level=info msg="StartContainer for \"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\" returns successfully" Mar 25 01:35:18.175322 containerd[1510]: time="2025-03-25T01:35:18.174377618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:35:18.176077 containerd[1510]: time="2025-03-25T01:35:18.175752578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:18.185388 kubelet[2785]: I0325 01:35:18.184604 2785 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7220257e-c04d-4718-a529-cbf87b01b9fd" path="/var/lib/kubelet/pods/7220257e-c04d-4718-a529-cbf87b01b9fd/volumes" Mar 25 01:35:18.400030 containerd[1510]: time="2025-03-25T01:35:18.399508309Z" level=error msg="Failed to destroy network for sandbox \"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.409575 systemd[1]: run-netns-cni\x2d0d7bd83a\x2de05a\x2d6768\x2d8669\x2dc8d7acdcbf83.mount: Deactivated successfully. 
Mar 25 01:35:18.412706 containerd[1510]: time="2025-03-25T01:35:18.411649303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.412978 kubelet[2785]: E0325 01:35:18.412929 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.413077 kubelet[2785]: E0325 01:35:18.413012 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:35:18.413077 kubelet[2785]: E0325 01:35:18.413047 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-zbxnm" Mar 25 01:35:18.413192 kubelet[2785]: E0325 01:35:18.413108 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zbxnm_calico-system(973a62e1-0a63-45ab-979a-41e23aff93de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be20b298f2c6079a12df1ff5da7fb1b0caba84c5550c6d9e2026a0b997153abe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zbxnm" podUID="973a62e1-0a63-45ab-979a-41e23aff93de" Mar 25 01:35:18.418362 containerd[1510]: time="2025-03-25T01:35:18.418311219Z" level=error msg="Failed to destroy network for sandbox \"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.422014 containerd[1510]: time="2025-03-25T01:35:18.420072898Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.422183 kubelet[2785]: E0325 01:35:18.421950 2785 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 25 01:35:18.422183 kubelet[2785]: E0325 01:35:18.422028 2785 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:35:18.422183 kubelet[2785]: E0325 01:35:18.422062 2785 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-pkm5c" Mar 25 01:35:18.422400 kubelet[2785]: E0325 01:35:18.422122 2785 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-pkm5c_kube-system(0ae54105-2fc2-4712-bcc0-03edef18e454)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e5529beca268811ad3675e3b06237ae1fa3a98bb624ca0293eb26407bc50cc56\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-pkm5c" 
podUID="0ae54105-2fc2-4712-bcc0-03edef18e454" Mar 25 01:35:18.424599 systemd[1]: run-netns-cni\x2df6720932\x2d4c8c\x2d75f8\x2d24f1\x2d26b5a065c1ea.mount: Deactivated successfully. Mar 25 01:35:18.848355 systemd[1]: cri-containerd-4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae.scope: Deactivated successfully. Mar 25 01:35:18.849661 containerd[1510]: time="2025-03-25T01:35:18.849479355Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\" id:\"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\" pid:4861 exited_at:{seconds:1742866518 nanos:849049495}" Mar 25 01:35:18.849661 containerd[1510]: time="2025-03-25T01:35:18.849578561Z" level=info msg="received exit event container_id:\"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\" id:\"4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae\" pid:4861 exited_at:{seconds:1742866518 nanos:849049495}" Mar 25 01:35:18.884907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4804d7a6baf804fbe3b0fc0d5aaffdeed8d60daa28a9f6c024c1f691889939ae-rootfs.mount: Deactivated successfully. 
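Every sandbox failure above carries the same root cause: the CNI plugin cannot stat `/var/lib/calico/nodename`, which the calico/node container writes at startup. A minimal, runnable sketch of that precheck (paths parameterized so it runs anywhere; the real plugin logic in Go is assumed, not reproduced):

```python
import os
import tempfile

def cni_precheck(calico_dir: str) -> str:
    """Sketch of the failing check in the log: the Calico CNI plugin reads
    the nodename file written by calico/node at startup. Until that file
    exists, every sandbox add/delete fails with the 'no such file or
    directory' error seen above. (Simplified illustration, not the
    plugin's actual code.)"""
    nodename_file = os.path.join(calico_dir, "nodename")
    try:
        with open(nodename_file) as f:
            return f"node registered as {f.read().strip()}"
    except FileNotFoundError:
        return f"stat {nodename_file}: no such file or directory"

# Before calico-node starts (as at 01:35:18 above) the check fails;
# once the node writes its name, CNI adds can proceed (as at 01:35:24).
with tempfile.TemporaryDirectory() as d:
    print(cni_precheck(d))                       # fails: file absent
    with open(os.path.join(d, "nodename"), "w") as f:
        f.write("ci-4284-node\n")                # hypothetical node name
    print(cni_precheck(d))                       # succeeds
```

This matches the remediation hint embedded in the error text itself: check that the calico/node container is running and has mounted `/var/lib/calico/`.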
Mar 25 01:35:19.684872 containerd[1510]: time="2025-03-25T01:35:19.684822739Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 25 01:35:19.704530 containerd[1510]: time="2025-03-25T01:35:19.700216627Z" level=info msg="Container 32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:19.719572 containerd[1510]: time="2025-03-25T01:35:19.719515491Z" level=info msg="CreateContainer within sandbox \"a6328017fffe7aeb64f191c3b9af341b5b06815564a07d85a31999a51fb8075e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\"" Mar 25 01:35:19.720322 containerd[1510]: time="2025-03-25T01:35:19.720227045Z" level=info msg="StartContainer for \"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\"" Mar 25 01:35:19.723233 containerd[1510]: time="2025-03-25T01:35:19.722906663Z" level=info msg="connecting to shim 32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687" address="unix:///run/containerd/s/5cf997327a5cfdf80407137562981ad15437bf995ad5cfe06dcc9848b73c58d7" protocol=ttrpc version=3 Mar 25 01:35:19.756210 systemd[1]: Started cri-containerd-32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687.scope - libcontainer container 32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687. 
Mar 25 01:35:19.828200 containerd[1510]: time="2025-03-25T01:35:19.827997754Z" level=info msg="StartContainer for \"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\" returns successfully" Mar 25 01:35:20.698159 kubelet[2785]: I0325 01:35:20.698081 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pbjjf" podStartSLOduration=4.698053707 podStartE2EDuration="4.698053707s" podCreationTimestamp="2025-03-25 01:35:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:20.697406305 +0000 UTC m=+90.683281434" watchObservedRunningTime="2025-03-25 01:35:20.698053707 +0000 UTC m=+90.683928836" Mar 25 01:35:20.761840 containerd[1510]: time="2025-03-25T01:35:20.761781004Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\" id:\"7dc45f056ab49e2cd4344005e838023a4bf1dd7f07f9b0cfc8bfa91900809247\" pid:5020 exit_status:1 exited_at:{seconds:1742866520 nanos:761423292}" Mar 25 01:35:21.144640 systemd[1]: Started sshd@16-10.128.0.106:22-139.178.89.65:38300.service - OpenSSH per-connection server daemon (139.178.89.65:38300). Mar 25 01:35:21.475633 sshd[5032]: Accepted publickey for core from 139.178.89.65 port 38300 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:21.476772 sshd-session[5032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:21.492212 systemd-logind[1481]: New session 16 of user core. Mar 25 01:35:21.497715 systemd[1]: Started session-16.scope - Session 16 of User core. 
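The `podStartSLOduration=4.698...` figure logged by the kubelet's `pod_startup_latency_tracker` above is, to a first approximation, `observedRunningTime - podCreationTimestamp`; since both pull timestamps are zero here, no image-pull time is excluded. A small sketch of that arithmetic (simplified; the tracker's exact accounting is assumed):

```python
from datetime import datetime

def start_slo_duration(created: str, observed_running: str) -> float:
    """Approximate the kubelet's pod startup SLO duration: time from pod
    creation to observed running, in seconds. Illustrative only."""
    fmt = "%Y-%m-%d %H:%M:%S.%f %z"
    t0 = datetime.strptime(created, fmt)
    t1 = datetime.strptime(observed_running, fmt)
    return (t1 - t0).total_seconds()

# calico-node-pbjjf: created 2025-03-25 01:35:16, observed running at
# 01:35:20.697406 -> roughly the logged podStartSLOduration of ~4.698s.
d = start_slo_duration("2025-03-25 01:35:16.000000 +0000",
                       "2025-03-25 01:35:20.697406 +0000")
```

The small residual difference from the logged 4.698053707s comes from sub-second detail in the creation timestamp that the log truncates.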
Mar 25 01:35:21.611323 kernel: bpftool[5154]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 25 01:35:21.851443 sshd[5126]: Connection closed by 139.178.89.65 port 38300 Mar 25 01:35:21.852179 sshd-session[5032]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:21.859068 systemd[1]: sshd@16-10.128.0.106:22-139.178.89.65:38300.service: Deactivated successfully. Mar 25 01:35:21.859602 systemd-logind[1481]: Session 16 logged out. Waiting for processes to exit. Mar 25 01:35:21.865177 systemd[1]: session-16.scope: Deactivated successfully. Mar 25 01:35:21.870085 systemd-logind[1481]: Removed session 16. Mar 25 01:35:21.876126 containerd[1510]: time="2025-03-25T01:35:21.875946196Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\" id:\"bb0ad91c81fd60a2e619acf76f51783ed74ea5ed50d1803fd158416952982e18\" pid:5173 exit_status:1 exited_at:{seconds:1742866521 nanos:874829827}" Mar 25 01:35:22.077368 systemd-networkd[1399]: vxlan.calico: Link UP Mar 25 01:35:22.077413 systemd-networkd[1399]: vxlan.calico: Gained carrier Mar 25 01:35:23.519675 systemd-networkd[1399]: vxlan.calico: Gained IPv6LL Mar 25 01:35:24.174324 containerd[1510]: time="2025-03-25T01:35:24.173730379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:24.331511 systemd-networkd[1399]: cali4f8ae8e8677: Link UP Mar 25 01:35:24.332708 systemd-networkd[1399]: cali4f8ae8e8677: Gained carrier Mar 25 01:35:24.366826 containerd[1510]: 2025-03-25 01:35:24.226 [INFO][5262] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0 coredns-7db6d8ff4d- kube-system 7890deb5-f2f5-4b95-bbc2-b10706e5343d 761 0 2025-03-25 01:34:06 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal coredns-7db6d8ff4d-glkrr eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4f8ae8e8677 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-" Mar 25 01:35:24.366826 containerd[1510]: 2025-03-25 01:35:24.226 [INFO][5262] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.366826 containerd[1510]: 2025-03-25 01:35:24.270 [INFO][5274] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" HandleID="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.283 [INFO][5274] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" HandleID="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292b10), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-glkrr", "timestamp":"2025-03-25 01:35:24.270482133 +0000 UTC"}, Hostname:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.283 [INFO][5274] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.283 [INFO][5274] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.283 [INFO][5274] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal' Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.285 [INFO][5274] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.290 [INFO][5274] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.297 [INFO][5274] ipam/ipam.go 489: Trying affinity for 192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367284 containerd[1510]: 2025-03-25 01:35:24.299 [INFO][5274] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.302 [INFO][5274] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.302 [INFO][5274] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.64/26 handle="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.305 [INFO][5274] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.312 [INFO][5274] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.64/26 handle="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.321 [INFO][5274] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.65/26] block=192.168.115.64/26 handle="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.321 [INFO][5274] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.65/26] handle="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.321 [INFO][5274] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 01:35:24.367856 containerd[1510]: 2025-03-25 01:35:24.321 [INFO][5274] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.65/26] IPv6=[] ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" HandleID="k8s-pod-network.a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.324 [INFO][5262] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7890deb5-f2f5-4b95-bbc2-b10706e5343d", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-glkrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.65/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f8ae8e8677", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.324 [INFO][5262] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.65/32] ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.324 [INFO][5262] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f8ae8e8677 ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.332 [INFO][5262] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.333 [INFO][5262] cni-plugin/k8s.go 414: Added Mac, interface name, and 
active container ID to endpoint ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7890deb5-f2f5-4b95-bbc2-b10706e5343d", ResourceVersion:"761", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb", Pod:"coredns-7db6d8ff4d-glkrr", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4f8ae8e8677", MAC:"76:1a:31:5a:01:5e", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:24.368719 containerd[1510]: 2025-03-25 01:35:24.359 [INFO][5262] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" Namespace="kube-system" Pod="coredns-7db6d8ff4d-glkrr" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--glkrr-eth0" Mar 25 01:35:24.428132 containerd[1510]: time="2025-03-25T01:35:24.427086019Z" level=info msg="connecting to shim a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb" address="unix:///run/containerd/s/b0f3c811d3beb914fd8dacf94abef94061fa99dc2cbb3be1f5ee0026a3181102" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:24.479525 systemd[1]: Started cri-containerd-a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb.scope - libcontainer container a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb. 
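The IPAM trace above (affinity lookup, block load, then "Attempting to assign 1 addresses from block") boils down to picking the first free host address in the node-affine block `192.168.115.64/26`. A simplified sketch of that last step (Calico's real ipam.go also manages handles, affinities, and compare-and-swap writes to the datastore, none of which is modeled here):

```python
import ipaddress

def auto_assign(block_cidr: str, allocated: set[str]) -> str:
    """Return the first unallocated host address in a block, a toy version
    of the Calico IPAM assignment step logged above."""
    block = ipaddress.ip_network(block_cidr)
    for ip in block.hosts():          # skips network/broadcast addresses
        if str(ip) not in allocated:
            return str(ip)
    raise RuntimeError(f"block {block_cidr} exhausted")

# Matches the log: coredns-7db6d8ff4d-glkrr gets 192.168.115.65 first,
# then calico-apiserver-96c65cfff-4c2dm gets 192.168.115.66 at 01:35:28.
first = auto_assign("192.168.115.64/26", set())
second = auto_assign("192.168.115.64/26", {first})
```

The per-node /26 block affinity is why both pods on this node draw consecutive addresses from the same block.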
Mar 25 01:35:24.550098 containerd[1510]: time="2025-03-25T01:35:24.550033255Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-glkrr,Uid:7890deb5-f2f5-4b95-bbc2-b10706e5343d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb\"" Mar 25 01:35:24.556508 containerd[1510]: time="2025-03-25T01:35:24.555726440Z" level=info msg="CreateContainer within sandbox \"a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:35:24.568482 containerd[1510]: time="2025-03-25T01:35:24.568440190Z" level=info msg="Container 85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:24.575319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1658620418.mount: Deactivated successfully. Mar 25 01:35:24.582912 containerd[1510]: time="2025-03-25T01:35:24.582853851Z" level=info msg="CreateContainer within sandbox \"a89b93d624bab9abe82abb991bfc473879228c3485ec66bcaaef698b04613eeb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0\"" Mar 25 01:35:24.585335 containerd[1510]: time="2025-03-25T01:35:24.583602485Z" level=info msg="StartContainer for \"85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0\"" Mar 25 01:35:24.585335 containerd[1510]: time="2025-03-25T01:35:24.584951875Z" level=info msg="connecting to shim 85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0" address="unix:///run/containerd/s/b0f3c811d3beb914fd8dacf94abef94061fa99dc2cbb3be1f5ee0026a3181102" protocol=ttrpc version=3 Mar 25 01:35:24.611521 systemd[1]: Started cri-containerd-85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0.scope - libcontainer container 85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0. 
Mar 25 01:35:24.656346 containerd[1510]: time="2025-03-25T01:35:24.656257691Z" level=info msg="StartContainer for \"85b7d3652fb3b253f4d0f6bcdf7f81739dd672d3d71134fd4214cc329b0addb0\" returns successfully" Mar 25 01:35:25.697571 systemd-networkd[1399]: cali4f8ae8e8677: Gained IPv6LL Mar 25 01:35:25.705679 kubelet[2785]: I0325 01:35:25.705597 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-glkrr" podStartSLOduration=79.705568819 podStartE2EDuration="1m19.705568819s" podCreationTimestamp="2025-03-25 01:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:24.713000169 +0000 UTC m=+94.698875301" watchObservedRunningTime="2025-03-25 01:35:25.705568819 +0000 UTC m=+95.691443949" Mar 25 01:35:26.911219 systemd[1]: Started sshd@17-10.128.0.106:22-139.178.89.65:38304.service - OpenSSH per-connection server daemon (139.178.89.65:38304). Mar 25 01:35:27.210242 sshd[5382]: Accepted publickey for core from 139.178.89.65 port 38304 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:27.212129 sshd-session[5382]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:27.219810 systemd-logind[1481]: New session 17 of user core. Mar 25 01:35:27.225521 systemd[1]: Started session-17.scope - Session 17 of User core. Mar 25 01:35:27.518342 sshd[5384]: Connection closed by 139.178.89.65 port 38304 Mar 25 01:35:27.519643 sshd-session[5382]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:27.524016 systemd[1]: sshd@17-10.128.0.106:22-139.178.89.65:38304.service: Deactivated successfully. Mar 25 01:35:27.527343 systemd[1]: session-17.scope: Deactivated successfully. Mar 25 01:35:27.529796 systemd-logind[1481]: Session 17 logged out. Waiting for processes to exit. Mar 25 01:35:27.531377 systemd-logind[1481]: Removed session 17. 
Mar 25 01:35:28.173747 containerd[1510]: time="2025-03-25T01:35:28.173651911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:28.320688 systemd-networkd[1399]: cali29d5a6f7bfe: Link UP Mar 25 01:35:28.321017 systemd-networkd[1399]: cali29d5a6f7bfe: Gained carrier Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.229 [INFO][5401] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0 calico-apiserver-96c65cfff- calico-apiserver 7e7c4ede-6986-4ec3-afbd-544ef8b098a2 763 0 2025-03-25 01:34:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96c65cfff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal calico-apiserver-96c65cfff-4c2dm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali29d5a6f7bfe [] []}} ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.229 [INFO][5401] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.344890 
containerd[1510]: 2025-03-25 01:35:28.269 [INFO][5412] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" HandleID="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.281 [INFO][5412] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" HandleID="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ece30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", "pod":"calico-apiserver-96c65cfff-4c2dm", "timestamp":"2025-03-25 01:35:28.269655188 +0000 UTC"}, Hostname:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.281 [INFO][5412] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.281 [INFO][5412] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.281 [INFO][5412] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal' Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.283 [INFO][5412] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.288 [INFO][5412] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.293 [INFO][5412] ipam/ipam.go 489: Trying affinity for 192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.295 [INFO][5412] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.299 [INFO][5412] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.299 [INFO][5412] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.64/26 handle="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.300 [INFO][5412] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7 Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.306 [INFO][5412] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.115.64/26 handle="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.314 [INFO][5412] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.66/26] block=192.168.115.64/26 handle="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.314 [INFO][5412] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.66/26] handle="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.314 [INFO][5412] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:35:28.344890 containerd[1510]: 2025-03-25 01:35:28.314 [INFO][5412] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.66/26] IPv6=[] ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" HandleID="k8s-pod-network.59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.317 [INFO][5401] cni-plugin/k8s.go 386: Populated endpoint ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0", GenerateName:"calico-apiserver-96c65cfff-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e7c4ede-6986-4ec3-afbd-544ef8b098a2", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c65cfff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"", Pod:"calico-apiserver-96c65cfff-4c2dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d5a6f7bfe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.317 [INFO][5401] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.66/32] ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.317 [INFO][5401] cni-plugin/dataplane_linux.go 69: Setting the host side veth 
name to cali29d5a6f7bfe ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.320 [INFO][5401] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.323 [INFO][5401] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0", GenerateName:"calico-apiserver-96c65cfff-", Namespace:"calico-apiserver", SelfLink:"", UID:"7e7c4ede-6986-4ec3-afbd-544ef8b098a2", ResourceVersion:"763", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c65cfff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7", Pod:"calico-apiserver-96c65cfff-4c2dm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali29d5a6f7bfe", MAC:"ce:e5:ae:65:48:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:28.347102 containerd[1510]: 2025-03-25 01:35:28.342 [INFO][5401] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-4c2dm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--4c2dm-eth0" Mar 25 01:35:28.392586 containerd[1510]: time="2025-03-25T01:35:28.392516142Z" level=info msg="connecting to shim 59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7" address="unix:///run/containerd/s/fba8b6b92a79763bd28a278f7f533b1be79b0660a8626688f6553dfcf2c52d19" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:28.433560 systemd[1]: Started cri-containerd-59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7.scope - libcontainer container 59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7. 
Mar 25 01:35:28.503500 containerd[1510]: time="2025-03-25T01:35:28.503441341Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-4c2dm,Uid:7e7c4ede-6986-4ec3-afbd-544ef8b098a2,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7\"" Mar 25 01:35:28.506002 containerd[1510]: time="2025-03-25T01:35:28.505960262Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 01:35:29.174063 containerd[1510]: time="2025-03-25T01:35:29.173946327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,}" Mar 25 01:35:29.341105 systemd-networkd[1399]: califfacc1c03af: Link UP Mar 25 01:35:29.342581 systemd-networkd[1399]: califfacc1c03af: Gained carrier Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.226 [INFO][5480] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0 calico-apiserver-96c65cfff- calico-apiserver 58b0bc19-665e-4afb-b4c7-59149aa71196 760 0 2025-03-25 01:34:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:96c65cfff projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal calico-apiserver-96c65cfff-xcvmz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] califfacc1c03af [] []}} ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" 
WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.226 [INFO][5480] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.268 [INFO][5491] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" HandleID="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.280 [INFO][5491] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" HandleID="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000309760), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", "pod":"calico-apiserver-96c65cfff-xcvmz", "timestamp":"2025-03-25 01:35:29.26897756 +0000 UTC"}, Hostname:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:35:29.372203 containerd[1510]: 
2025-03-25 01:35:29.280 [INFO][5491] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.280 [INFO][5491] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.280 [INFO][5491] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal' Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.282 [INFO][5491] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.288 [INFO][5491] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.296 [INFO][5491] ipam/ipam.go 489: Trying affinity for 192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.300 [INFO][5491] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.304 [INFO][5491] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.304 [INFO][5491] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.64/26 handle="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.306 [INFO][5491] ipam/ipam.go 1685: Creating new 
handle: k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.314 [INFO][5491] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.64/26 handle="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.328 [INFO][5491] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.67/26] block=192.168.115.64/26 handle="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.328 [INFO][5491] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.67/26] handle="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.328 [INFO][5491] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 01:35:29.372203 containerd[1510]: 2025-03-25 01:35:29.328 [INFO][5491] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.67/26] IPv6=[] ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" HandleID="k8s-pod-network.1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.332 [INFO][5480] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0", GenerateName:"calico-apiserver-96c65cfff-", Namespace:"calico-apiserver", SelfLink:"", UID:"58b0bc19-665e-4afb-b4c7-59149aa71196", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c65cfff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", 
ContainerID:"", Pod:"calico-apiserver-96c65cfff-xcvmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califfacc1c03af", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.332 [INFO][5480] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.67/32] ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.332 [INFO][5480] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califfacc1c03af ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.343 [INFO][5480] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.344 [INFO][5480] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" 
WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0", GenerateName:"calico-apiserver-96c65cfff-", Namespace:"calico-apiserver", SelfLink:"", UID:"58b0bc19-665e-4afb-b4c7-59149aa71196", ResourceVersion:"760", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"96c65cfff", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf", Pod:"calico-apiserver-96c65cfff-xcvmz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.115.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"califfacc1c03af", MAC:"9a:da:70:50:32:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:29.376000 containerd[1510]: 2025-03-25 01:35:29.368 [INFO][5480] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" 
Namespace="calico-apiserver" Pod="calico-apiserver-96c65cfff-xcvmz" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-calico--apiserver--96c65cfff--xcvmz-eth0" Mar 25 01:35:29.460661 containerd[1510]: time="2025-03-25T01:35:29.460477518Z" level=info msg="connecting to shim 1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf" address="unix:///run/containerd/s/54f6bd538bb747aa128f655c0a045316e2d4afe3818ce0ad16f5e39fd91c1387" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:29.518655 systemd[1]: Started cri-containerd-1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf.scope - libcontainer container 1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf. Mar 25 01:35:29.535603 systemd-networkd[1399]: cali29d5a6f7bfe: Gained IPv6LL Mar 25 01:35:29.625362 containerd[1510]: time="2025-03-25T01:35:29.624669471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-96c65cfff-xcvmz,Uid:58b0bc19-665e-4afb-b4c7-59149aa71196,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf\"" Mar 25 01:35:30.174616 containerd[1510]: time="2025-03-25T01:35:30.173914721Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,}" Mar 25 01:35:30.413905 systemd-networkd[1399]: calica296bf9fda: Link UP Mar 25 01:35:30.418382 systemd-networkd[1399]: calica296bf9fda: Gained carrier Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.250 [INFO][5564] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0 csi-node-driver- calico-system 973a62e1-0a63-45ab-979a-41e23aff93de 638 0 2025-03-25 01:34:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal csi-node-driver-zbxnm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calica296bf9fda [] []}} ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.251 [INFO][5564] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.328 [INFO][5577] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" HandleID="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.357 [INFO][5577] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" HandleID="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a9570), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", "pod":"csi-node-driver-zbxnm", "timestamp":"2025-03-25 01:35:30.328453638 +0000 UTC"}, Hostname:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.357 [INFO][5577] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.357 [INFO][5577] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.357 [INFO][5577] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal' Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.360 [INFO][5577] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.366 [INFO][5577] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.373 [INFO][5577] ipam/ipam.go 489: Trying affinity for 192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.376 [INFO][5577] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.380 [INFO][5577] ipam/ipam.go 232: Affinity is confirmed and block has been 
loaded cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.381 [INFO][5577] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.64/26 handle="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.383 [INFO][5577] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973 Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.391 [INFO][5577] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.115.64/26 handle="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.403 [INFO][5577] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.68/26] block=192.168.115.64/26 handle="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.403 [INFO][5577] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.68/26] handle="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.403 [INFO][5577] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 25 01:35:30.454540 containerd[1510]: 2025-03-25 01:35:30.404 [INFO][5577] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.68/26] IPv6=[] ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" HandleID="k8s-pod-network.3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.406 [INFO][5564] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"973a62e1-0a63-45ab-979a-41e23aff93de", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"", 
Pod:"csi-node-driver-zbxnm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica296bf9fda", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.407 [INFO][5564] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.68/32] ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.407 [INFO][5564] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calica296bf9fda ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.413 [INFO][5564] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.420 [INFO][5564] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"973a62e1-0a63-45ab-979a-41e23aff93de", ResourceVersion:"638", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973", Pod:"csi-node-driver-zbxnm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.115.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calica296bf9fda", MAC:"aa:af:8d:92:fb:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:30.456832 containerd[1510]: 2025-03-25 01:35:30.450 [INFO][5564] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" Namespace="calico-system" Pod="csi-node-driver-zbxnm" 
WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-csi--node--driver--zbxnm-eth0" Mar 25 01:35:30.532144 containerd[1510]: time="2025-03-25T01:35:30.532079079Z" level=info msg="connecting to shim 3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973" address="unix:///run/containerd/s/67cc5638c7681031be591218dea42bc8346b584aa5717cce838693d930884f64" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:30.580917 systemd[1]: Started cri-containerd-3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973.scope - libcontainer container 3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973. Mar 25 01:35:30.668749 containerd[1510]: time="2025-03-25T01:35:30.668700820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zbxnm,Uid:973a62e1-0a63-45ab-979a-41e23aff93de,Namespace:calico-system,Attempt:0,} returns sandbox id \"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973\"" Mar 25 01:35:31.008589 systemd-networkd[1399]: califfacc1c03af: Gained IPv6LL Mar 25 01:35:31.126795 containerd[1510]: time="2025-03-25T01:35:31.126732224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.127927 containerd[1510]: time="2025-03-25T01:35:31.127861213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=42993204" Mar 25 01:35:31.129129 containerd[1510]: time="2025-03-25T01:35:31.129058736Z" level=info msg="ImageCreate event name:\"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.131946 containerd[1510]: time="2025-03-25T01:35:31.131876617Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.132887 containerd[1510]: time="2025-03-25T01:35:31.132832817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 2.6268208s" Mar 25 01:35:31.132887 containerd[1510]: time="2025-03-25T01:35:31.132883151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 25 01:35:31.135123 containerd[1510]: time="2025-03-25T01:35:31.135058703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\"" Mar 25 01:35:31.136645 containerd[1510]: time="2025-03-25T01:35:31.136354371Z" level=info msg="CreateContainer within sandbox \"59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 01:35:31.147451 containerd[1510]: time="2025-03-25T01:35:31.147367083Z" level=info msg="Container 5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:31.160071 containerd[1510]: time="2025-03-25T01:35:31.160018302Z" level=info msg="CreateContainer within sandbox \"59743923147bc1fbfab28a92635ee8265a4bb88534f13765530d6c50d0f13be7\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200\"" Mar 25 01:35:31.161014 containerd[1510]: time="2025-03-25T01:35:31.160617416Z" level=info msg="StartContainer for \"5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200\"" Mar 25 01:35:31.163144 containerd[1510]: time="2025-03-25T01:35:31.162944408Z" 
level=info msg="connecting to shim 5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200" address="unix:///run/containerd/s/fba8b6b92a79763bd28a278f7f533b1be79b0660a8626688f6553dfcf2c52d19" protocol=ttrpc version=3 Mar 25 01:35:31.202506 systemd[1]: Started cri-containerd-5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200.scope - libcontainer container 5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200. Mar 25 01:35:31.273641 containerd[1510]: time="2025-03-25T01:35:31.273502037Z" level=info msg="StartContainer for \"5bb78a90458b84b3db2f19aadaa35677f8c8aa4c8470b139592239e3a1c4d200\" returns successfully" Mar 25 01:35:31.335232 containerd[1510]: time="2025-03-25T01:35:31.335174739Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:31.337479 containerd[1510]: time="2025-03-25T01:35:31.337407779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.2: active requests=0, bytes read=77" Mar 25 01:35:31.340317 containerd[1510]: time="2025-03-25T01:35:31.340112526Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" with image id \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:3623f5b60fad0da3387a8649371b53171a4b1226f4d989d2acad9145dc0ef56f\", size \"44486324\" in 204.732125ms" Mar 25 01:35:31.340317 containerd[1510]: time="2025-03-25T01:35:31.340169938Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.2\" returns image reference \"sha256:d27fc480d1ad33921c40abef2ab6828fadf6524674fdcc622f571a5abc34ad55\"" Mar 25 01:35:31.342015 containerd[1510]: time="2025-03-25T01:35:31.341751484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 25 01:35:31.345441 containerd[1510]: time="2025-03-25T01:35:31.345395984Z" 
level=info msg="CreateContainer within sandbox \"1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Mar 25 01:35:31.357494 containerd[1510]: time="2025-03-25T01:35:31.357451595Z" level=info msg="Container 7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:31.376027 containerd[1510]: time="2025-03-25T01:35:31.375970619Z" level=info msg="CreateContainer within sandbox \"1a8f5f91cd28d8fd3621146ade0dc7bab2c97a54a3046750d5d8699414768daf\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5\"" Mar 25 01:35:31.376973 containerd[1510]: time="2025-03-25T01:35:31.376932140Z" level=info msg="StartContainer for \"7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5\"" Mar 25 01:35:31.382266 containerd[1510]: time="2025-03-25T01:35:31.380777487Z" level=info msg="connecting to shim 7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5" address="unix:///run/containerd/s/54f6bd538bb747aa128f655c0a045316e2d4afe3818ce0ad16f5e39fd91c1387" protocol=ttrpc version=3 Mar 25 01:35:31.411736 systemd[1]: Started cri-containerd-7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5.scope - libcontainer container 7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5. 
Mar 25 01:35:31.501576 containerd[1510]: time="2025-03-25T01:35:31.501465646Z" level=info msg="StartContainer for \"7499b63d46c88e1780947cb27f7816f270ad91ec9c170642e642de3b783ca8f5\" returns successfully" Mar 25 01:35:31.738612 kubelet[2785]: I0325 01:35:31.738523 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96c65cfff-4c2dm" podStartSLOduration=75.109744706 podStartE2EDuration="1m17.738501115s" podCreationTimestamp="2025-03-25 01:34:14 +0000 UTC" firstStartedPulling="2025-03-25 01:35:28.505323943 +0000 UTC m=+98.491199062" lastFinishedPulling="2025-03-25 01:35:31.1340803 +0000 UTC m=+101.119955471" observedRunningTime="2025-03-25 01:35:31.737833878 +0000 UTC m=+101.723709007" watchObservedRunningTime="2025-03-25 01:35:31.738501115 +0000 UTC m=+101.724376246" Mar 25 01:35:32.178934 containerd[1510]: time="2025-03-25T01:35:32.178276727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,}" Mar 25 01:35:32.223529 systemd-networkd[1399]: calica296bf9fda: Gained IPv6LL Mar 25 01:35:32.584921 systemd[1]: Started sshd@18-10.128.0.106:22-139.178.89.65:58264.service - OpenSSH per-connection server daemon (139.178.89.65:58264). 
Mar 25 01:35:32.604705 systemd-networkd[1399]: cali4dd3c5fd2d4: Link UP Mar 25 01:35:32.607723 systemd-networkd[1399]: cali4dd3c5fd2d4: Gained carrier Mar 25 01:35:32.645474 kubelet[2785]: I0325 01:35:32.643870 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-96c65cfff-xcvmz" podStartSLOduration=76.929784499 podStartE2EDuration="1m18.643841444s" podCreationTimestamp="2025-03-25 01:34:14 +0000 UTC" firstStartedPulling="2025-03-25 01:35:29.627496478 +0000 UTC m=+99.613371601" lastFinishedPulling="2025-03-25 01:35:31.341553428 +0000 UTC m=+101.327428546" observedRunningTime="2025-03-25 01:35:31.759111909 +0000 UTC m=+101.744987039" watchObservedRunningTime="2025-03-25 01:35:32.643841444 +0000 UTC m=+102.629716571" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.362 [INFO][5724] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0 coredns-7db6d8ff4d- kube-system 0ae54105-2fc2-4712-bcc0-03edef18e454 756 0 2025-03-25 01:34:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal coredns-7db6d8ff4d-pkm5c eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali4dd3c5fd2d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.364 [INFO][5724] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.488 [INFO][5736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" HandleID="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.504 [INFO][5736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" HandleID="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000407f50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", "pod":"coredns-7db6d8ff4d-pkm5c", "timestamp":"2025-03-25 01:35:32.488466651 +0000 UTC"}, Hostname:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.504 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.504 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.504 [INFO][5736] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal' Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.511 [INFO][5736] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.519 [INFO][5736] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.531 [INFO][5736] ipam/ipam.go 489: Trying affinity for 192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.535 [INFO][5736] ipam/ipam.go 155: Attempting to load block cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.542 [INFO][5736] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.115.64/26 host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.542 [INFO][5736] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.115.64/26 handle="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.545 [INFO][5736] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31 Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.554 [INFO][5736] ipam/ipam.go 1203: Writing block in order to claim IPs 
block=192.168.115.64/26 handle="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.574 [INFO][5736] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.115.69/26] block=192.168.115.64/26 handle="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.574 [INFO][5736] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.115.69/26] handle="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" host="ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal" Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.574 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 25 01:35:32.651937 containerd[1510]: 2025-03-25 01:35:32.574 [INFO][5736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.115.69/26] IPv6=[] ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" HandleID="k8s-pod-network.5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Workload="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.591 [INFO][5724] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, 
ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ae54105-2fc2-4712-bcc0-03edef18e454", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"", Pod:"coredns-7db6d8ff4d-pkm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dd3c5fd2d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.591 [INFO][5724] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.115.69/32] ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" 
WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.592 [INFO][5724] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4dd3c5fd2d4 ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.611 [INFO][5724] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.616 [INFO][5724] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"0ae54105-2fc2-4712-bcc0-03edef18e454", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.March, 25, 1, 34, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", 
"projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4284-0-0-be75562db3d75daa49a3.c.flatcar-212911.internal", ContainerID:"5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31", Pod:"coredns-7db6d8ff4d-pkm5c", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.115.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali4dd3c5fd2d4", MAC:"c6:ef:51:6f:5f:31", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 25 01:35:32.655132 containerd[1510]: 2025-03-25 01:35:32.644 [INFO][5724] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" Namespace="kube-system" Pod="coredns-7db6d8ff4d-pkm5c" WorkloadEndpoint="ci--4284--0--0--be75562db3d75daa49a3.c.flatcar--212911.internal-k8s-coredns--7db6d8ff4d--pkm5c-eth0" Mar 25 01:35:32.695364 containerd[1510]: time="2025-03-25T01:35:32.695278762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:32.699848 containerd[1510]: time="2025-03-25T01:35:32.699775813Z" level=info msg="stop 
pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 25 01:35:32.706862 containerd[1510]: time="2025-03-25T01:35:32.706812131Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:32.728635 containerd[1510]: time="2025-03-25T01:35:32.722931311Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 25 01:35:32.728635 containerd[1510]: time="2025-03-25T01:35:32.726847845Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size \"9402991\" in 1.385055055s" Mar 25 01:35:32.728635 containerd[1510]: time="2025-03-25T01:35:32.726887541Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 25 01:35:32.746325 containerd[1510]: time="2025-03-25T01:35:32.746247662Z" level=info msg="CreateContainer within sandbox \"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 25 01:35:32.782537 containerd[1510]: time="2025-03-25T01:35:32.782469241Z" level=info msg="connecting to shim 5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31" address="unix:///run/containerd/s/7d68c7c0a8fe96606a974588d3b0b9b762633b24692928bcf58541d9464299cf" namespace=k8s.io protocol=ttrpc version=3 Mar 25 01:35:32.807697 containerd[1510]: time="2025-03-25T01:35:32.807621920Z" level=info msg="Container 
7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:32.846993 containerd[1510]: time="2025-03-25T01:35:32.846853376Z" level=info msg="CreateContainer within sandbox \"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb\"" Mar 25 01:35:32.848523 containerd[1510]: time="2025-03-25T01:35:32.848483621Z" level=info msg="StartContainer for \"7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb\"" Mar 25 01:35:32.861098 containerd[1510]: time="2025-03-25T01:35:32.860425477Z" level=info msg="connecting to shim 7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb" address="unix:///run/containerd/s/67cc5638c7681031be591218dea42bc8346b584aa5717cce838693d930884f64" protocol=ttrpc version=3 Mar 25 01:35:32.898517 systemd[1]: Started cri-containerd-5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31.scope - libcontainer container 5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31. Mar 25 01:35:32.927549 systemd[1]: Started cri-containerd-7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb.scope - libcontainer container 7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb. Mar 25 01:35:32.982743 sshd[5750]: Accepted publickey for core from 139.178.89.65 port 58264 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:32.984696 sshd-session[5750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:32.997664 systemd-logind[1481]: New session 18 of user core. Mar 25 01:35:33.003717 systemd[1]: Started session-18.scope - Session 18 of User core. 
Mar 25 01:35:33.062242 containerd[1510]: time="2025-03-25T01:35:33.062195813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-pkm5c,Uid:0ae54105-2fc2-4712-bcc0-03edef18e454,Namespace:kube-system,Attempt:0,} returns sandbox id \"5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31\"" Mar 25 01:35:33.072117 containerd[1510]: time="2025-03-25T01:35:33.071958677Z" level=info msg="CreateContainer within sandbox \"5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Mar 25 01:35:33.101072 containerd[1510]: time="2025-03-25T01:35:33.097625804Z" level=info msg="Container 44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6: CDI devices from CRI Config.CDIDevices: []" Mar 25 01:35:33.103131 containerd[1510]: time="2025-03-25T01:35:33.102343139Z" level=info msg="StartContainer for \"7a6e92c3ed34497a04e0b886d2442c75ad324b3df2e985469edc0b1bcaf93ccb\" returns successfully" Mar 25 01:35:33.112176 containerd[1510]: time="2025-03-25T01:35:33.111490609Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 25 01:35:33.126629 containerd[1510]: time="2025-03-25T01:35:33.126586771Z" level=info msg="CreateContainer within sandbox \"5615f88fa9c1d9c49fb49126565531631e3867458a70f9c7a4793c134538eb31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6\"" Mar 25 01:35:33.127631 containerd[1510]: time="2025-03-25T01:35:33.127568750Z" level=info msg="StartContainer for \"44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6\"" Mar 25 01:35:33.128793 containerd[1510]: time="2025-03-25T01:35:33.128754259Z" level=info msg="connecting to shim 44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6" address="unix:///run/containerd/s/7d68c7c0a8fe96606a974588d3b0b9b762633b24692928bcf58541d9464299cf" protocol=ttrpc version=3 Mar 25 
01:35:33.204526 systemd[1]: Started cri-containerd-44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6.scope - libcontainer container 44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6. Mar 25 01:35:33.412334 containerd[1510]: time="2025-03-25T01:35:33.410481213Z" level=info msg="StartContainer for \"44d060bdc3c03e7903801e16efe811b2ff6483b1e78eb904c8f78174293570b6\" returns successfully" Mar 25 01:35:33.455320 sshd[5828]: Connection closed by 139.178.89.65 port 58264 Mar 25 01:35:33.454047 sshd-session[5750]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:33.461236 systemd[1]: sshd@18-10.128.0.106:22-139.178.89.65:58264.service: Deactivated successfully. Mar 25 01:35:33.467490 systemd[1]: session-18.scope: Deactivated successfully. Mar 25 01:35:33.471375 systemd-logind[1481]: Session 18 logged out. Waiting for processes to exit. Mar 25 01:35:33.475349 systemd-logind[1481]: Removed session 18. Mar 25 01:35:33.510975 systemd[1]: Started sshd@19-10.128.0.106:22-139.178.89.65:58268.service - OpenSSH per-connection server daemon (139.178.89.65:58268). 
Mar 25 01:35:33.761241 kubelet[2785]: I0325 01:35:33.761199 2785 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 25 01:35:33.822257 kubelet[2785]: I0325 01:35:33.822183 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-pkm5c" podStartSLOduration=87.822146782 podStartE2EDuration="1m27.822146782s" podCreationTimestamp="2025-03-25 01:34:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-03-25 01:35:33.785190763 +0000 UTC m=+103.771065895" watchObservedRunningTime="2025-03-25 01:35:33.822146782 +0000 UTC m=+103.808021912" Mar 25 01:35:33.844718 sshd[5881]: Accepted publickey for core from 139.178.89.65 port 58268 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M Mar 25 01:35:33.845647 sshd-session[5881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 25 01:35:33.863791 systemd-logind[1481]: New session 19 of user core. Mar 25 01:35:33.870521 systemd[1]: Started session-19.scope - Session 19 of User core. Mar 25 01:35:33.951563 systemd-networkd[1399]: cali4dd3c5fd2d4: Gained IPv6LL Mar 25 01:35:34.376422 sshd[5886]: Connection closed by 139.178.89.65 port 58268 Mar 25 01:35:34.378407 sshd-session[5881]: pam_unix(sshd:session): session closed for user core Mar 25 01:35:34.393019 systemd[1]: sshd@19-10.128.0.106:22-139.178.89.65:58268.service: Deactivated successfully. Mar 25 01:35:34.400626 systemd[1]: session-19.scope: Deactivated successfully. Mar 25 01:35:34.403908 systemd-logind[1481]: Session 19 logged out. Waiting for processes to exit. Mar 25 01:35:34.407756 systemd-logind[1481]: Removed session 19. Mar 25 01:35:34.436012 systemd[1]: Started sshd@20-10.128.0.106:22-139.178.89.65:58276.service - OpenSSH per-connection server daemon (139.178.89.65:58276). 
Mar 25 01:35:34.612501 containerd[1510]: time="2025-03-25T01:35:34.612381168Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:35:34.614641 containerd[1510]: time="2025-03-25T01:35:34.614565818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843"
Mar 25 01:35:34.616381 containerd[1510]: time="2025-03-25T01:35:34.616038079Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:35:34.622371 containerd[1510]: time="2025-03-25T01:35:34.620472253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 25 01:35:34.622371 containerd[1510]: time="2025-03-25T01:35:34.621613421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.510078292s"
Mar 25 01:35:34.622371 containerd[1510]: time="2025-03-25T01:35:34.622165893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\""
Mar 25 01:35:34.627460 containerd[1510]: time="2025-03-25T01:35:34.627345909Z" level=info msg="CreateContainer within sandbox \"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Mar 25 01:35:34.650247 containerd[1510]: time="2025-03-25T01:35:34.644611549Z" level=info msg="Container 69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac: CDI devices from CRI Config.CDIDevices: []"
Mar 25 01:35:34.658051 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962689479.mount: Deactivated successfully.
Mar 25 01:35:34.664619 containerd[1510]: time="2025-03-25T01:35:34.664552946Z" level=info msg="CreateContainer within sandbox \"3ea1d3a010236c23cc8fa572ddf334265f7a684554575e1b68e117aaf5b81973\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac\""
Mar 25 01:35:34.665866 containerd[1510]: time="2025-03-25T01:35:34.665829555Z" level=info msg="StartContainer for \"69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac\""
Mar 25 01:35:34.668732 containerd[1510]: time="2025-03-25T01:35:34.668689130Z" level=info msg="connecting to shim 69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac" address="unix:///run/containerd/s/67cc5638c7681031be591218dea42bc8346b584aa5717cce838693d930884f64" protocol=ttrpc version=3
Mar 25 01:35:34.723515 systemd[1]: Started cri-containerd-69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac.scope - libcontainer container 69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac.
Mar 25 01:35:34.790198 sshd[5901]: Accepted publickey for core from 139.178.89.65 port 58276 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:34.791174 sshd-session[5901]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:34.802801 systemd-logind[1481]: New session 20 of user core.
Mar 25 01:35:34.810497 systemd[1]: Started session-20.scope - Session 20 of User core.
Mar 25 01:35:34.908356 containerd[1510]: time="2025-03-25T01:35:34.907666380Z" level=info msg="StartContainer for \"69a8b41b9a8d48fb273f7cf2c995e9881f7c00263245057bbde666d4ac9c56ac\" returns successfully"
Mar 25 01:35:35.371510 kubelet[2785]: I0325 01:35:35.370659 2785 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Mar 25 01:35:35.371510 kubelet[2785]: I0325 01:35:35.370703 2785 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Mar 25 01:35:35.824874 kubelet[2785]: I0325 01:35:35.824747 2785 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zbxnm" podStartSLOduration=78.873159284 podStartE2EDuration="1m22.82472465s" podCreationTimestamp="2025-03-25 01:34:13 +0000 UTC" firstStartedPulling="2025-03-25 01:35:30.67188879 +0000 UTC m=+100.657763910" lastFinishedPulling="2025-03-25 01:35:34.623454152 +0000 UTC m=+104.609329276" observedRunningTime="2025-03-25 01:35:35.818510714 +0000 UTC m=+105.804385842" watchObservedRunningTime="2025-03-25 01:35:35.82472465 +0000 UTC m=+105.810599780"
Mar 25 01:35:36.621724 ntpd[1476]: Listen normally on 7 vxlan.calico 192.168.115.64:123
Mar 25 01:35:36.621845 ntpd[1476]: Listen normally on 8 vxlan.calico [fe80::6497:45ff:fed2:aae8%4]:123
Mar 25 01:35:36.621922 ntpd[1476]: Listen normally on 9 cali4f8ae8e8677 [fe80::ecee:eeff:feee:eeee%7]:123
Mar 25 01:35:36.621978 ntpd[1476]: Listen normally on 10 cali29d5a6f7bfe [fe80::ecee:eeff:feee:eeee%8]:123
Mar 25 01:35:36.622035 ntpd[1476]: Listen normally on 11 califfacc1c03af [fe80::ecee:eeff:feee:eeee%9]:123
Mar 25 01:35:36.622088 ntpd[1476]: Listen normally on 12 calica296bf9fda [fe80::ecee:eeff:feee:eeee%10]:123
Mar 25 01:35:36.622138 ntpd[1476]: Listen normally on 13 cali4dd3c5fd2d4 [fe80::ecee:eeff:feee:eeee%11]:123
Mar 25 01:35:37.061242 sshd[5924]: Connection closed by 139.178.89.65 port 58276
Mar 25 01:35:37.062167 sshd-session[5901]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:37.067881 systemd[1]: sshd@20-10.128.0.106:22-139.178.89.65:58276.service: Deactivated successfully.
Mar 25 01:35:37.070545 systemd[1]: session-20.scope: Deactivated successfully.
Mar 25 01:35:37.070905 systemd[1]: session-20.scope: Consumed 722ms CPU time, 70.7M memory peak.
Mar 25 01:35:37.073344 systemd-logind[1481]: Session 20 logged out. Waiting for processes to exit.
Mar 25 01:35:37.075276 systemd-logind[1481]: Removed session 20.
Mar 25 01:35:37.120788 systemd[1]: Started sshd@21-10.128.0.106:22-139.178.89.65:58282.service - OpenSSH per-connection server daemon (139.178.89.65:58282).
Mar 25 01:35:37.427089 sshd[5959]: Accepted publickey for core from 139.178.89.65 port 58282 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:37.429692 sshd-session[5959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:37.440369 systemd-logind[1481]: New session 21 of user core.
Mar 25 01:35:37.448428 systemd[1]: Started session-21.scope - Session 21 of User core.
Mar 25 01:35:37.981489 sshd[5961]: Connection closed by 139.178.89.65 port 58282
Mar 25 01:35:37.986482 sshd-session[5959]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:37.997734 systemd-logind[1481]: Session 21 logged out. Waiting for processes to exit.
Mar 25 01:35:37.999485 systemd[1]: sshd@21-10.128.0.106:22-139.178.89.65:58282.service: Deactivated successfully.
Mar 25 01:35:38.009498 systemd[1]: session-21.scope: Deactivated successfully.
Mar 25 01:35:38.013333 systemd-logind[1481]: Removed session 21.
Mar 25 01:35:38.043472 systemd[1]: Started sshd@22-10.128.0.106:22-139.178.89.65:44920.service - OpenSSH per-connection server daemon (139.178.89.65:44920).
Mar 25 01:35:38.365989 sshd[5973]: Accepted publickey for core from 139.178.89.65 port 44920 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:38.367776 sshd-session[5973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:38.379057 systemd-logind[1481]: New session 22 of user core.
Mar 25 01:35:38.385663 systemd[1]: Started session-22.scope - Session 22 of User core.
Mar 25 01:35:38.656821 sshd[5975]: Connection closed by 139.178.89.65 port 44920
Mar 25 01:35:38.658134 sshd-session[5973]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:38.663901 systemd[1]: sshd@22-10.128.0.106:22-139.178.89.65:44920.service: Deactivated successfully.
Mar 25 01:35:38.667188 systemd[1]: session-22.scope: Deactivated successfully.
Mar 25 01:35:38.668829 systemd-logind[1481]: Session 22 logged out. Waiting for processes to exit.
Mar 25 01:35:38.670349 systemd-logind[1481]: Removed session 22.
Mar 25 01:35:43.712600 systemd[1]: Started sshd@23-10.128.0.106:22-139.178.89.65:44928.service - OpenSSH per-connection server daemon (139.178.89.65:44928).
Mar 25 01:35:44.015117 sshd[6002]: Accepted publickey for core from 139.178.89.65 port 44928 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:44.017474 sshd-session[6002]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:44.024551 systemd-logind[1481]: New session 23 of user core.
Mar 25 01:35:44.033525 systemd[1]: Started session-23.scope - Session 23 of User core.
Mar 25 01:35:44.306994 sshd[6004]: Connection closed by 139.178.89.65 port 44928
Mar 25 01:35:44.308486 sshd-session[6002]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:44.315450 systemd[1]: sshd@23-10.128.0.106:22-139.178.89.65:44928.service: Deactivated successfully.
Mar 25 01:35:44.319411 systemd[1]: session-23.scope: Deactivated successfully.
Mar 25 01:35:44.320831 systemd-logind[1481]: Session 23 logged out. Waiting for processes to exit.
Mar 25 01:35:44.322661 systemd-logind[1481]: Removed session 23.
Mar 25 01:35:47.228251 containerd[1510]: time="2025-03-25T01:35:47.228133068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"32a1edf2eea13ac37263d7d2401b2f15b52ccb88e2951916833a3cff8bc25687\" id:\"a40ffcb713a7488c4c98e50609ddf026cbf7e4b8f1408a1ee609a6080befd14f\" pid:6029 exited_at:{seconds:1742866547 nanos:227163138}"
Mar 25 01:35:49.364647 systemd[1]: Started sshd@24-10.128.0.106:22-139.178.89.65:44896.service - OpenSSH per-connection server daemon (139.178.89.65:44896).
Mar 25 01:35:49.668075 sshd[6042]: Accepted publickey for core from 139.178.89.65 port 44896 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:49.670110 sshd-session[6042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:49.676643 systemd-logind[1481]: New session 24 of user core.
Mar 25 01:35:49.684515 systemd[1]: Started session-24.scope - Session 24 of User core.
Mar 25 01:35:49.959901 sshd[6044]: Connection closed by 139.178.89.65 port 44896
Mar 25 01:35:49.960813 sshd-session[6042]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:49.966808 systemd[1]: sshd@24-10.128.0.106:22-139.178.89.65:44896.service: Deactivated successfully.
Mar 25 01:35:49.969531 systemd[1]: session-24.scope: Deactivated successfully.
Mar 25 01:35:49.970738 systemd-logind[1481]: Session 24 logged out. Waiting for processes to exit.
Mar 25 01:35:49.972370 systemd-logind[1481]: Removed session 24.
Mar 25 01:35:50.140835 containerd[1510]: time="2025-03-25T01:35:50.140745052Z" level=info msg="StopPodSandbox for \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\""
Mar 25 01:35:50.141755 containerd[1510]: time="2025-03-25T01:35:50.140999538Z" level=info msg="TearDown network for sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" successfully"
Mar 25 01:35:50.141755 containerd[1510]: time="2025-03-25T01:35:50.141025743Z" level=info msg="StopPodSandbox for \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" returns successfully"
Mar 25 01:35:50.141755 containerd[1510]: time="2025-03-25T01:35:50.141623183Z" level=info msg="RemovePodSandbox for \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\""
Mar 25 01:35:50.141755 containerd[1510]: time="2025-03-25T01:35:50.141663060Z" level=info msg="Forcibly stopping sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\""
Mar 25 01:35:50.142154 containerd[1510]: time="2025-03-25T01:35:50.141793820Z" level=info msg="TearDown network for sandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" successfully"
Mar 25 01:35:50.144327 containerd[1510]: time="2025-03-25T01:35:50.144271408Z" level=info msg="Ensure that sandbox 87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896 in task-service has been cleanup successfully"
Mar 25 01:35:50.148781 containerd[1510]: time="2025-03-25T01:35:50.148717486Z" level=info msg="RemovePodSandbox \"87ae1b41653d81fb9aa088d580ac4391f100c5fede283bda7900962949041896\" returns successfully"
Mar 25 01:35:50.149242 containerd[1510]: time="2025-03-25T01:35:50.149206675Z" level=info msg="StopPodSandbox for \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\""
Mar 25 01:35:50.149488 containerd[1510]: time="2025-03-25T01:35:50.149379697Z" level=info msg="TearDown network for sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" successfully"
Mar 25 01:35:50.149488 containerd[1510]: time="2025-03-25T01:35:50.149405181Z" level=info msg="StopPodSandbox for \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" returns successfully"
Mar 25 01:35:50.149933 containerd[1510]: time="2025-03-25T01:35:50.149872762Z" level=info msg="RemovePodSandbox for \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\""
Mar 25 01:35:50.149933 containerd[1510]: time="2025-03-25T01:35:50.149917135Z" level=info msg="Forcibly stopping sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\""
Mar 25 01:35:50.150081 containerd[1510]: time="2025-03-25T01:35:50.150041868Z" level=info msg="TearDown network for sandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" successfully"
Mar 25 01:35:50.152061 containerd[1510]: time="2025-03-25T01:35:50.151884739Z" level=info msg="Ensure that sandbox 30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92 in task-service has been cleanup successfully"
Mar 25 01:35:50.155436 containerd[1510]: time="2025-03-25T01:35:50.155397819Z" level=info msg="RemovePodSandbox \"30d1d864e3e24f1c80872e4c9a08a404b0cf15f4216273651a4e7805ebbc1b92\" returns successfully"
Mar 25 01:35:55.020423 systemd[1]: Started sshd@25-10.128.0.106:22-139.178.89.65:44908.service - OpenSSH per-connection server daemon (139.178.89.65:44908).
Mar 25 01:35:55.327094 sshd[6059]: Accepted publickey for core from 139.178.89.65 port 44908 ssh2: RSA SHA256:jKgA4Ny8Mk2MkcudVG4JZXx/NACsxdZPaWiSnO7U3/M
Mar 25 01:35:55.328927 sshd-session[6059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Mar 25 01:35:55.336666 systemd-logind[1481]: New session 25 of user core.
Mar 25 01:35:55.340506 systemd[1]: Started session-25.scope - Session 25 of User core.
Mar 25 01:35:55.629982 sshd[6062]: Connection closed by 139.178.89.65 port 44908
Mar 25 01:35:55.631550 sshd-session[6059]: pam_unix(sshd:session): session closed for user core
Mar 25 01:35:55.638513 systemd[1]: sshd@25-10.128.0.106:22-139.178.89.65:44908.service: Deactivated successfully.
Mar 25 01:35:55.642021 systemd[1]: session-25.scope: Deactivated successfully.
Mar 25 01:35:55.643468 systemd-logind[1481]: Session 25 logged out. Waiting for processes to exit.
Mar 25 01:35:55.645151 systemd-logind[1481]: Removed session 25.