Jan 30 13:49:27.075610 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:49:27.075647 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:49:27.075661 kernel: BIOS-provided physical RAM map:
Jan 30 13:49:27.075671 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:49:27.075680 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 30 13:49:27.075708 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 30 13:49:27.075720 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 30 13:49:27.075734 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 30 13:49:27.075744 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 30 13:49:27.075758 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 30 13:49:27.075769 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 30 13:49:27.075786 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 30 13:49:27.075806 kernel: printk: bootconsole [earlyser0] enabled
Jan 30 13:49:27.075826 kernel: NX (Execute Disable) protection: active
Jan 30 13:49:27.075850 kernel: APIC: Static calls initialized
Jan 30 13:49:27.075862 kernel: efi: EFI v2.7 by Microsoft
Jan 30 13:49:27.075875 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 30 13:49:27.075888 kernel: SMBIOS 3.1.0 present.
Jan 30 13:49:27.075901 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 30 13:49:27.075913 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 30 13:49:27.075926 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 30 13:49:27.075938 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 30 13:49:27.075950 kernel: Hyper-V: Nested features: 0x1e0101
Jan 30 13:49:27.075963 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 30 13:49:27.075977 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 30 13:49:27.075990 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:49:27.076003 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:49:27.076016 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 30 13:49:27.076028 kernel: tsc: Detected 2593.905 MHz processor
Jan 30 13:49:27.076041 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:49:27.076054 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:49:27.076066 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 30 13:49:27.076079 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:49:27.076094 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:49:27.076107 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 30 13:49:27.076119 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 30 13:49:27.076132 kernel: Using GB pages for direct mapping
Jan 30 13:49:27.076144 kernel: Secure boot disabled
Jan 30 13:49:27.076156 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:49:27.076169 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 30 13:49:27.076187 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076203 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076216 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 30 13:49:27.076230 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 30 13:49:27.076243 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076257 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076270 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076286 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076300 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076314 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076327 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:49:27.076340 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 30 13:49:27.076354 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 30 13:49:27.076367 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 30 13:49:27.076380 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 30 13:49:27.076397 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 30 13:49:27.076410 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 30 13:49:27.076423 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 30 13:49:27.076437 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 30 13:49:27.076451 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 30 13:49:27.076464 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 30 13:49:27.076478 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:49:27.076491 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:49:27.076504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 30 13:49:27.076520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 30 13:49:27.076533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 30 13:49:27.076547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 30 13:49:27.076560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 30 13:49:27.076574 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 30 13:49:27.076587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 30 13:49:27.076601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 30 13:49:27.076615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 30 13:49:27.076628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 30 13:49:27.076644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 30 13:49:27.076658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 30 13:49:27.076671 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 30 13:49:27.076692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 30 13:49:27.076706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 30 13:49:27.076719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 30 13:49:27.076733 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 30 13:49:27.076747 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 30 13:49:27.076760 kernel: Zone ranges:
Jan 30 13:49:27.076777 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:49:27.076790 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:49:27.076803 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:49:27.076817 kernel: Movable zone start for each node
Jan 30 13:49:27.076830 kernel: Early memory node ranges
Jan 30 13:49:27.076844 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:49:27.076857 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 30 13:49:27.076870 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 30 13:49:27.076884 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:49:27.076900 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 30 13:49:27.076913 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:49:27.076927 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:49:27.076940 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 30 13:49:27.076953 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 30 13:49:27.076967 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 30 13:49:27.076980 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:49:27.076993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:49:27.077007 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:49:27.077023 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 30 13:49:27.077036 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:49:27.077049 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 30 13:49:27.077063 kernel: Booting paravirtualized kernel on Hyper-V
Jan 30 13:49:27.077076 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:49:27.077090 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:49:27.077103 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:49:27.077117 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:49:27.077130 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:49:27.077145 kernel: Hyper-V: PV spinlocks enabled
Jan 30 13:49:27.077159 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:49:27.077174 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:49:27.077188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:49:27.077201 kernel: random: crng init done
Jan 30 13:49:27.077214 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 13:49:27.077228 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:49:27.077241 kernel: Fallback order for Node 0: 0
Jan 30 13:49:27.077257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 30 13:49:27.077280 kernel: Policy zone: Normal
Jan 30 13:49:27.077295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:49:27.077311 kernel: software IO TLB: area num 2.
Jan 30 13:49:27.077326 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved)
Jan 30 13:49:27.077340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:49:27.077354 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:49:27.077369 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:49:27.077383 kernel: Dynamic Preempt: voluntary
Jan 30 13:49:27.077397 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:49:27.077413 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:49:27.077430 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:49:27.077444 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:49:27.077459 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:49:27.077473 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:49:27.077488 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:49:27.077505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:49:27.077519 kernel: Using NULL legacy PIC
Jan 30 13:49:27.077533 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 30 13:49:27.077547 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:49:27.077562 kernel: Console: colour dummy device 80x25
Jan 30 13:49:27.077576 kernel: printk: console [tty1] enabled
Jan 30 13:49:27.077590 kernel: printk: console [ttyS0] enabled
Jan 30 13:49:27.077605 kernel: printk: bootconsole [earlyser0] disabled
Jan 30 13:49:27.077624 kernel: ACPI: Core revision 20230628
Jan 30 13:49:27.077638 kernel: Failed to register legacy timer interrupt
Jan 30 13:49:27.077655 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:49:27.077669 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:49:27.077697 kernel: Hyper-V: Using IPI hypercalls
Jan 30 13:49:27.077710 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 30 13:49:27.077736 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 30 13:49:27.077764 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 30 13:49:27.077786 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 30 13:49:27.077797 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 30 13:49:27.077810 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 30 13:49:27.077829 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Jan 30 13:49:27.077843 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:49:27.077857 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:49:27.077869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:49:27.077880 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:49:27.077892 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:49:27.077906 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:49:27.077920 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:49:27.077934 kernel: RETBleed: Vulnerable
Jan 30 13:49:27.077948 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:49:27.077960 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:49:27.077973 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:49:27.077986 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:49:27.077998 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:49:27.078009 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:49:27.078031 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:49:27.078045 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:49:27.078057 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:49:27.078069 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:49:27.078081 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:49:27.078097 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 30 13:49:27.078112 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 30 13:49:27.078123 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 30 13:49:27.078135 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 30 13:49:27.078147 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:49:27.078160 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:49:27.078174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:49:27.078188 kernel: landlock: Up and running.
Jan 30 13:49:27.078201 kernel: SELinux: Initializing.
Jan 30 13:49:27.078214 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:49:27.078228 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:49:27.078243 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:49:27.078262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:49:27.078277 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:49:27.078292 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:49:27.078307 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:49:27.078322 kernel: signal: max sigframe size: 3632
Jan 30 13:49:27.078336 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:49:27.078352 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:49:27.078367 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:49:27.078381 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:49:27.078399 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:49:27.078414 kernel: .... node #0, CPUs: #1
Jan 30 13:49:27.078429 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 30 13:49:27.078446 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:49:27.078461 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:49:27.078476 kernel: smpboot: Max logical packages: 1
Jan 30 13:49:27.078491 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 30 13:49:27.078506 kernel: devtmpfs: initialized
Jan 30 13:49:27.078524 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:49:27.078539 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 30 13:49:27.078554 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:49:27.078573 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:49:27.078586 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:49:27.078599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:49:27.078614 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:49:27.078628 kernel: audit: type=2000 audit(1738244966.027:1): state=initialized audit_enabled=0 res=1
Jan 30 13:49:27.078643 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:49:27.078660 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:49:27.078675 kernel: cpuidle: using governor menu
Jan 30 13:49:27.078707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:49:27.078722 kernel: dca service started, version 1.12.1
Jan 30 13:49:27.078737 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 30 13:49:27.078752 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:49:27.078767 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:49:27.078781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:49:27.078792 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:49:27.078808 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:49:27.078820 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:49:27.078833 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:49:27.078846 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:49:27.078859 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:49:27.078874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:49:27.078888 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:49:27.078900 kernel: ACPI: Interpreter enabled
Jan 30 13:49:27.078911 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:49:27.078926 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:49:27.078939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:49:27.078952 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 13:49:27.078964 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 30 13:49:27.078978 kernel: iommu: Default domain type: Translated
Jan 30 13:49:27.078991 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:49:27.079003 kernel: efivars: Registered efivars operations
Jan 30 13:49:27.079016 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:49:27.079028 kernel: PCI: System does not support PCI
Jan 30 13:49:27.079043 kernel: vgaarb: loaded
Jan 30 13:49:27.079057 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 30 13:49:27.079071 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:49:27.079084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:49:27.079098 kernel: pnp: PnP ACPI init
Jan 30 13:49:27.079111 kernel: pnp: PnP ACPI: found 3 devices
Jan 30 13:49:27.079124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:49:27.079137 kernel: NET: Registered PF_INET protocol family
Jan 30 13:49:27.079150 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:49:27.079166 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:49:27.079180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:49:27.079192 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:49:27.079205 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:49:27.079217 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:49:27.079232 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:49:27.079247 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:49:27.079261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:49:27.079275 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:49:27.079294 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:49:27.079308 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:49:27.079323 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 30 13:49:27.079337 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:49:27.079352 kernel: Initialise system trusted keyrings
Jan 30 13:49:27.079367 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:49:27.079379 kernel: Key type asymmetric registered
Jan 30 13:49:27.079393 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:49:27.079406 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:49:27.079423 kernel: io scheduler mq-deadline registered
Jan 30 13:49:27.079436 kernel: io scheduler kyber registered
Jan 30 13:49:27.079449 kernel: io scheduler bfq registered
Jan 30 13:49:27.079461 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:49:27.079474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:49:27.079489 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:49:27.079502 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:49:27.079516 kernel: i8042: PNP: No PS/2 controller found.
Jan 30 13:49:27.083134 kernel: rtc_cmos 00:02: registered as rtc0
Jan 30 13:49:27.083325 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:49:26 UTC (1738244966)
Jan 30 13:49:27.083446 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 30 13:49:27.083464 kernel: intel_pstate: CPU model not supported
Jan 30 13:49:27.083480 kernel: efifb: probing for efifb
Jan 30 13:49:27.083496 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:49:27.083511 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:49:27.083526 kernel: efifb: scrolling: redraw
Jan 30 13:49:27.083546 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:49:27.083559 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:49:27.083572 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:49:27.083585 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:49:27.083598 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:49:27.083614 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:49:27.083629 kernel: Segment Routing with IPv6
Jan 30 13:49:27.083644 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:49:27.083661 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:49:27.083675 kernel: Key type dns_resolver registered
Jan 30 13:49:27.083712 kernel: IPI shorthand broadcast: enabled
Jan 30 13:49:27.083725 kernel: sched_clock: Marking stable (824002900, 41204400)->(1061829800, -196622500)
Jan 30 13:49:27.083739 kernel: registered taskstats version 1
Jan 30 13:49:27.083753 kernel: Loading compiled-in X.509 certificates
Jan 30 13:49:27.083766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:49:27.083778 kernel: Key type .fscrypt registered
Jan 30 13:49:27.083793 kernel: Key type fscrypt-provisioning registered
Jan 30 13:49:27.083807 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:49:27.083825 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:49:27.083839 kernel: ima: No architecture policies found
Jan 30 13:49:27.083853 kernel: clk: Disabling unused clocks
Jan 30 13:49:27.083867 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:49:27.083881 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:49:27.083894 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:49:27.083908 kernel: Run /init as init process
Jan 30 13:49:27.083923 kernel: with arguments:
Jan 30 13:49:27.083937 kernel: /init
Jan 30 13:49:27.083952 kernel: with environment:
Jan 30 13:49:27.083976 kernel: HOME=/
Jan 30 13:49:27.083990 kernel: TERM=linux
Jan 30 13:49:27.084004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:49:27.084023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:49:27.084042 systemd[1]: Detected virtualization microsoft.
Jan 30 13:49:27.084058 systemd[1]: Detected architecture x86-64.
Jan 30 13:49:27.084072 systemd[1]: Running in initrd.
Jan 30 13:49:27.084091 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:49:27.084106 systemd[1]: Hostname set to .
Jan 30 13:49:27.084123 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:49:27.084139 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:49:27.084154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:49:27.084170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:49:27.084186 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:49:27.084202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:49:27.084222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:49:27.084237 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:49:27.084256 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:49:27.084272 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:49:27.084288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:49:27.084304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:49:27.084319 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:49:27.084338 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:49:27.084354 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:49:27.084369 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:49:27.084386 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:49:27.084401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:49:27.084417 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:49:27.084433 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:49:27.084448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:49:27.084464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:49:27.084484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:49:27.084500 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:49:27.084516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:49:27.084532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:49:27.084547 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:49:27.084564 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:49:27.084578 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:49:27.084594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:49:27.084614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:49:27.084657 systemd-journald[176]: Collecting audit messages is disabled.
Jan 30 13:49:27.084752 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:49:27.084770 systemd-journald[176]: Journal started
Jan 30 13:49:27.084811 systemd-journald[176]: Runtime Journal (/run/log/journal/2be58b61524d4c8293a64c4a49f66b31) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:49:27.078070 systemd-modules-load[177]: Inserted module 'overlay'
Jan 30 13:49:27.095341 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:49:27.095980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:49:27.103738 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:49:27.118552 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:49:27.130017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:49:27.135939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:49:27.138914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:49:27.146700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:49:27.165005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:49:27.159823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:49:27.172869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:49:27.180369 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 30 13:49:27.180700 kernel: Bridge firewalling registered
Jan 30 13:49:27.181752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:49:27.184839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:49:27.203426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:49:27.209721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:49:27.215009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:49:27.220320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:49:27.239836 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:49:27.253052 systemd-resolved[211]: Positive Trust Anchors:
Jan 30 13:49:27.253067 systemd-resolved[211]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:49:27.253126 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:49:27.277454 dracut-cmdline[213]: dracut-dracut-053
Jan 30 13:49:27.277454 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:49:27.258251 systemd-resolved[211]: Defaulting to hostname 'linux'.
Jan 30 13:49:27.259821 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:49:27.299665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:49:27.332708 kernel: SCSI subsystem initialized
Jan 30 13:49:27.342704 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:49:27.353706 kernel: iscsi: registered transport (tcp)
Jan 30 13:49:27.375459 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:49:27.375542 kernel: QLogic iSCSI HBA Driver
Jan 30 13:49:27.412018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:49:27.420849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:49:27.447245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:49:27.447337 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:49:27.450200 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:49:27.492713 kernel: raid6: avx512x4 gen() 18266 MB/s
Jan 30 13:49:27.510703 kernel: raid6: avx512x2 gen() 18156 MB/s
Jan 30 13:49:27.529695 kernel: raid6: avx512x1 gen() 18165 MB/s
Jan 30 13:49:27.548698 kernel: raid6: avx2x4 gen() 18180 MB/s
Jan 30 13:49:27.567697 kernel: raid6: avx2x2 gen() 18124 MB/s
Jan 30 13:49:27.587857 kernel: raid6: avx2x1 gen() 13984 MB/s
Jan 30 13:49:27.587911 kernel: raid6: using algorithm avx512x4 gen() 18266 MB/s
Jan 30 13:49:27.608873 kernel: raid6: .... xor() 6725 MB/s, rmw enabled
Jan 30 13:49:27.608916 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:49:27.630719 kernel: xor: automatically using best checksumming function avx
Jan 30 13:49:27.782719 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:49:27.792397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:49:27.799983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:49:27.817285 systemd-udevd[395]: Using default interface naming scheme 'v255'.
Jan 30 13:49:27.823831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:49:27.836948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:49:27.849863 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 30 13:49:27.879668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:49:27.889883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:49:27.930886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:49:27.942890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:49:27.969375 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:49:27.973708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:49:27.984214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:49:27.989373 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:49:28.006392 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:49:28.019711 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:49:28.046505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:49:28.055157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:49:28.063856 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:49:28.063892 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:49:28.057610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:49:28.078720 kernel: hv_vmbus: Vmbus version:5.2
Jan 30 13:49:28.069043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:49:28.072323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:49:28.072595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:49:28.075575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:49:28.098702 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:49:28.098792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:49:28.121702 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:49:28.126769 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:49:28.126822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:49:28.140043 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 30 13:49:28.140107 kernel: scsi host0: storvsc_host_t
Jan 30 13:49:28.140292 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:49:28.145486 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:49:28.149709 kernel: PTP clock support registered
Jan 30 13:49:28.153727 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:49:28.153801 kernel: scsi host1: storvsc_host_t
Jan 30 13:49:28.158078 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:49:28.161361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:49:28.176649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:49:28.191657 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:49:28.192208 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:49:28.192226 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:49:28.199752 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 30 13:49:28.207332 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:49:28.207605 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:49:28.209594 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:49:28.212704 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:49:28.818424 systemd-resolved[211]: Clock change detected. Flushing caches.
Jan 30 13:49:28.834860 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:49:28.840917 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:49:28.840952 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:49:28.837165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:49:28.855299 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:49:28.869811 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:49:28.870041 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:49:28.870222 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:49:28.870411 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:49:28.870586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:49:28.870607 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:49:28.945403 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: VF slot 1 added
Jan 30 13:49:28.954204 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:49:28.959696 kernel: hv_pci c3902b8a-3e78-4c31-a243-2bf4f31719be: PCI VMBus probing: Using version 0x10004
Jan 30 13:49:29.003204 kernel: hv_pci c3902b8a-3e78-4c31-a243-2bf4f31719be: PCI host bridge to bus 3e78:00
Jan 30 13:49:29.003435 kernel: pci_bus 3e78:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 30 13:49:29.003610 kernel: pci_bus 3e78:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:49:29.003766 kernel: pci 3e78:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 30 13:49:29.003964 kernel: pci 3e78:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:49:29.004159 kernel: pci 3e78:00:02.0: enabling Extended Tags
Jan 30 13:49:29.004334 kernel: pci 3e78:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3e78:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 30 13:49:29.004505 kernel: pci_bus 3e78:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:49:29.004670 kernel: pci 3e78:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:49:29.176738 kernel: mlx5_core 3e78:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:49:29.407088 kernel: mlx5_core 3e78:00:02.0: firmware version: 14.30.5000
Jan 30 13:49:29.407317 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: VF registering: eth1
Jan 30 13:49:29.407484 kernel: mlx5_core 3e78:00:02.0 eth1: joined to eth0
Jan 30 13:49:29.407676 kernel: mlx5_core 3e78:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:49:29.354954 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:49:29.414044 kernel: mlx5_core 3e78:00:02.0 enP15992s1: renamed from eth1
Jan 30 13:49:29.470277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:49:29.484702 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454)
Jan 30 13:49:29.496031 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456)
Jan 30 13:49:29.509843 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:49:29.513321 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:49:29.519528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:49:29.534244 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:49:29.547126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:49:29.554024 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:49:30.562032 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:49:30.562873 disk-uuid[598]: The operation has completed successfully.
Jan 30 13:49:30.654214 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:49:30.654332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:49:30.668175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:49:30.674135 sh[684]: Success
Jan 30 13:49:30.708023 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:49:30.903149 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:49:30.917119 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:49:30.921668 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:49:30.944872 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:49:30.944953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:49:30.948221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:49:30.950794 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:49:30.953181 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:49:31.326604 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:49:31.327638 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:49:31.337258 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:49:31.340808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:49:31.363445 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:49:31.363509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:49:31.365768 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:49:31.390209 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:49:31.399493 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:49:31.403427 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:49:31.410082 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:49:31.428193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:49:31.440041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:49:31.446825 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:49:31.469687 systemd-networkd[868]: lo: Link UP
Jan 30 13:49:31.469698 systemd-networkd[868]: lo: Gained carrier
Jan 30 13:49:31.471826 systemd-networkd[868]: Enumeration completed
Jan 30 13:49:31.472111 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:49:31.472989 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:49:31.472993 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:49:31.475398 systemd[1]: Reached target network.target - Network.
Jan 30 13:49:31.541027 kernel: mlx5_core 3e78:00:02.0 enP15992s1: Link up
Jan 30 13:49:31.575158 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: Data path switched to VF: enP15992s1
Jan 30 13:49:31.575580 systemd-networkd[868]: enP15992s1: Link UP
Jan 30 13:49:31.575942 systemd-networkd[868]: eth0: Link UP
Jan 30 13:49:31.576107 systemd-networkd[868]: eth0: Gained carrier
Jan 30 13:49:31.576119 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:49:31.586264 systemd-networkd[868]: enP15992s1: Gained carrier
Jan 30 13:49:31.606057 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 30 13:49:32.257492 ignition[853]: Ignition 2.19.0
Jan 30 13:49:32.257504 ignition[853]: Stage: fetch-offline
Jan 30 13:49:32.259335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:49:32.257552 ignition[853]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:49:32.257562 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:49:32.257684 ignition[853]: parsed url from cmdline: ""
Jan 30 13:49:32.257689 ignition[853]: no config URL provided
Jan 30 13:49:32.257695 ignition[853]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:49:32.257711 ignition[853]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:49:32.277140 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:49:32.257718 ignition[853]: failed to fetch config: resource requires networking
Jan 30 13:49:32.257971 ignition[853]: Ignition finished successfully
Jan 30 13:49:32.291881 ignition[876]: Ignition 2.19.0
Jan 30 13:49:32.291888 ignition[876]: Stage: fetch
Jan 30 13:49:32.292679 ignition[876]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:49:32.292693 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:49:32.292813 ignition[876]: parsed url from cmdline: ""
Jan 30 13:49:32.292820 ignition[876]: no config URL provided
Jan 30 13:49:32.292826 ignition[876]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:49:32.292834 ignition[876]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:49:32.292856 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:49:32.369249 ignition[876]: GET result: OK
Jan 30 13:49:32.369708 ignition[876]: config has been read from IMDS userdata
Jan 30 13:49:32.369743 ignition[876]: parsing config with SHA512: e999c99ed0afce266b24cac14caa53daa44e715fd1c7c558d1f03246320547cb1743db0fcbe78fddb64b2b997d8540b1a7f992c753cdc444ed74ca5abb25fe16
Jan 30 13:49:32.376902 unknown[876]: fetched base config from "system"
Jan 30 13:49:32.376918 unknown[876]: fetched base config from "system"
Jan 30 13:49:32.377354 ignition[876]: fetch: fetch complete
Jan 30 13:49:32.376926 unknown[876]: fetched user config from "azure"
Jan 30 13:49:32.377361 ignition[876]: fetch: fetch passed
Jan 30 13:49:32.383102 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:49:32.377408 ignition[876]: Ignition finished successfully
Jan 30 13:49:32.396316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:49:32.415353 ignition[882]: Ignition 2.19.0
Jan 30 13:49:32.415365 ignition[882]: Stage: kargs
Jan 30 13:49:32.415594 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:49:32.415607 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:49:32.416915 ignition[882]: kargs: kargs passed
Jan 30 13:49:32.416965 ignition[882]: Ignition finished successfully
Jan 30 13:49:32.427022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:49:32.437167 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:49:32.452616 ignition[888]: Ignition 2.19.0
Jan 30 13:49:32.452628 ignition[888]: Stage: disks
Jan 30 13:49:32.454665 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:49:32.452843 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:49:32.457894 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:49:32.452856 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:49:32.461368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:49:32.453723 ignition[888]: disks: disks passed
Jan 30 13:49:32.453771 ignition[888]: Ignition finished successfully
Jan 30 13:49:32.478466 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:49:32.478578 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:49:32.479032 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:49:32.499374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:49:32.574679 systemd-fsck[896]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:49:32.578902 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:49:32.588421 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:49:32.669261 systemd-networkd[868]: eth0: Gained IPv6LL
Jan 30 13:49:32.686023 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:49:32.686578 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:49:32.691045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:49:32.732136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:49:32.737084 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:49:32.748546 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (908)
Jan 30 13:49:32.743196 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:49:32.754676 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:49:32.760714 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:49:32.760765 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:49:32.761465 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:49:32.773702 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:49:32.761512 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:49:32.768024 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:49:32.775217 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:49:32.785301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:49:32.845151 systemd-networkd[868]: enP15992s1: Gained IPv6LL
Jan 30 13:49:33.377732 coreos-metadata[910]: Jan 30 13:49:33.377 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:49:33.384008 coreos-metadata[910]: Jan 30 13:49:33.383 INFO Fetch successful
Jan 30 13:49:33.386599 coreos-metadata[910]: Jan 30 13:49:33.384 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:49:33.398108 coreos-metadata[910]: Jan 30 13:49:33.398 INFO Fetch successful
Jan 30 13:49:33.416046 coreos-metadata[910]: Jan 30 13:49:33.415 INFO wrote hostname ci-4081.3.0-a-38674a3e2a to /sysroot/etc/hostname
Jan 30 13:49:33.420303 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:49:33.500566 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:49:33.537361 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:49:33.561618 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:49:33.587708 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:49:34.472441 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:49:34.482150 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:49:34.490230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:49:34.499220 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:34.500271 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:49:34.529981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:49:34.537154 ignition[1027]: INFO : Ignition 2.19.0 Jan 30 13:49:34.537154 ignition[1027]: INFO : Stage: mount Jan 30 13:49:34.539608 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:34.539608 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:34.539608 ignition[1027]: INFO : mount: mount passed Jan 30 13:49:34.539608 ignition[1027]: INFO : Ignition finished successfully Jan 30 13:49:34.539335 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:49:34.558130 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:49:34.572211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:49:34.584216 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1038) Jan 30 13:49:34.584268 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:34.588021 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:34.591926 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:34.597027 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:34.598739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:34.626142 ignition[1055]: INFO : Ignition 2.19.0 Jan 30 13:49:34.626142 ignition[1055]: INFO : Stage: files Jan 30 13:49:34.630745 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:34.630745 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:34.630745 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:49:34.662701 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:49:34.662701 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:49:34.750208 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:49:34.753959 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:49:34.757512 unknown[1055]: wrote ssh authorized keys file for user: core Jan 30 13:49:34.760149 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:49:34.802462 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:49:34.807652 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:49:34.846369 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:49:35.059714 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:49:35.059714 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:35.068518 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 30 13:49:35.068518 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:35.076482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:35.076482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:35.084539 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:35.088669 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:35.093275 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:35.097482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:35.101656 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:35.105753 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.111677 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.117215 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.125665 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:49:35.651806 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:49:35.955905 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.955905 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:49:35.969645 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:35.974174 ignition[1055]: 
INFO : files: files passed Jan 30 13:49:35.974174 ignition[1055]: INFO : Ignition finished successfully Jan 30 13:49:35.971641 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:49:36.010213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:49:36.016126 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:49:36.027254 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:49:36.027386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:49:36.060549 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.060549 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.068078 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.074247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:36.074548 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:49:36.091318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:49:36.113789 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:49:36.113907 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:49:36.125421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:49:36.127921 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:49:36.132603 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:49:36.146203 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:49:36.160170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:36.167287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:49:36.179786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:36.180103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:36.180613 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:49:36.181128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:49:36.181269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:36.181814 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:49:36.182689 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:49:36.183058 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:49:36.183414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:36.183772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:36.184148 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:49:36.184612 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:36.185042 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:49:36.185414 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 30 13:49:36.185796 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:49:36.186153 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:49:36.186282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:36.186938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:36.187339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:36.187686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:49:36.220412 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:36.225660 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:49:36.230507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:49:36.281527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:49:36.281746 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:36.290186 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:49:36.290349 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:49:36.296946 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:49:36.297103 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:49:36.313429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:49:36.315717 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:49:36.317836 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:36.324535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:49:36.330447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:49:36.330631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:36.342504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:49:36.349127 ignition[1107]: INFO : Ignition 2.19.0 Jan 30 13:49:36.349127 ignition[1107]: INFO : Stage: umount Jan 30 13:49:36.349127 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:36.349127 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:36.342677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:49:36.362106 ignition[1107]: INFO : umount: umount passed Jan 30 13:49:36.362106 ignition[1107]: INFO : Ignition finished successfully Jan 30 13:49:36.360340 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:49:36.360430 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:49:36.365111 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:49:36.365393 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:49:36.378550 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:49:36.378618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:49:36.385051 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:49:36.385127 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:49:36.391665 systemd[1]: Stopped target network.target - Network. Jan 30 13:49:36.391752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 30 13:49:36.391809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:36.392518 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:49:36.392851 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:49:36.398062 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:36.402859 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:49:36.404922 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:49:36.407236 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:49:36.407292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:49:36.411779 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:49:36.411833 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:49:36.434961 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:49:36.435072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:49:36.439316 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:49:36.439379 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:49:36.444100 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:49:36.448563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:49:36.458069 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 30 13:49:36.460120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:49:36.461069 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:49:36.461194 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:49:36.467520 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:49:36.467654 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:49:36.472398 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:49:36.472486 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:49:36.477408 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:49:36.477497 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:49:36.485277 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:49:36.485353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:36.489065 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:49:36.489138 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:36.510103 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:49:36.514589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:49:36.514664 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:36.520263 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:49:36.523106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:36.525529 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:49:36.525575 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:36.530201 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:49:36.530256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:49:36.535506 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:36.561491 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:49:36.561675 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:36.570971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:49:36.571070 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:36.573927 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:49:36.573966 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:36.574811 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:49:36.574852 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:49:36.591845 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:49:36.591933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:49:36.596512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:36.596571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:36.609203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:49:36.611706 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:49:36.611782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:36.633162 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: Data path switched from VF: enP15992s1 Jan 30 13:49:36.624328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:36.624406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:36.627703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:49:36.627808 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:49:36.652862 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:49:36.652998 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:49:36.658094 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:49:36.673286 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:49:36.699059 systemd[1]: Switching root. 
Jan 30 13:49:36.768717 systemd-journald[176]: Journal stopped Jan 30 13:49:27.076397 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 30 13:49:27.076410 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:49:27.076423 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:49:27.076437 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 30 13:49:27.076451 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:49:27.076464 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:49:27.076478 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:49:27.076491 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:49:27.076504 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:49:27.076520 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:49:27.076533 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:49:27.076547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:49:27.076560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:49:27.076574 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:49:27.076587 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:49:27.076601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:49:27.076615 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:49:27.076628 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:49:27.076644 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:49:27.076658 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:49:27.076671 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:49:27.076692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:49:27.076706 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:49:27.076719 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:49:27.076733 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:49:27.076747 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 30 13:49:27.076760 kernel: Zone ranges: Jan 30 13:49:27.076777 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:49:27.076790 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:49:27.076803 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:49:27.076817 kernel: Movable zone start for each node Jan 30 13:49:27.076830 kernel: Early memory node ranges Jan 30 13:49:27.076844 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:49:27.076857 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:49:27.076870 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:49:27.076884 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:49:27.076900 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:49:27.076913 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:49:27.076927 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:49:27.076940 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 30 13:49:27.076953 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 30 13:49:27.076967 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:49:27.076980 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:49:27.076993 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:49:27.077007 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:49:27.077023 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:49:27.077036 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:49:27.077049 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:49:27.077063 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:49:27.077076 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:49:27.077090 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:49:27.077103 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:49:27.077117 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:49:27.077130 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:49:27.077145 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:49:27.077159 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:49:27.077174 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:49:27.077188 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:49:27.077201 kernel: random: crng init done Jan 30 13:49:27.077214 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:49:27.077228 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:49:27.077241 kernel: Fallback order for Node 0: 0 Jan 30 13:49:27.077257 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:49:27.077280 kernel: Policy zone: Normal Jan 30 13:49:27.077295 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:49:27.077311 kernel: software IO TLB: area num 2. Jan 30 13:49:27.077326 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved) Jan 30 13:49:27.077340 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:49:27.077354 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:49:27.077369 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:49:27.077383 kernel: Dynamic Preempt: voluntary Jan 30 13:49:27.077397 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:49:27.077413 kernel: rcu: RCU event tracing is enabled. Jan 30 13:49:27.077430 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:49:27.077444 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:49:27.077459 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:49:27.077473 kernel: Tracing variant of Tasks RCU enabled. 
Jan 30 13:49:27.077488 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:49:27.077505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:49:27.077519 kernel: Using NULL legacy PIC Jan 30 13:49:27.077533 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:49:27.077547 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:49:27.077562 kernel: Console: colour dummy device 80x25 Jan 30 13:49:27.077576 kernel: printk: console [tty1] enabled Jan 30 13:49:27.077590 kernel: printk: console [ttyS0] enabled Jan 30 13:49:27.077605 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:49:27.077624 kernel: ACPI: Core revision 20230628 Jan 30 13:49:27.077638 kernel: Failed to register legacy timer interrupt Jan 30 13:49:27.077655 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:49:27.077669 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:49:27.077697 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:49:27.077710 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:49:27.077736 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:49:27.077764 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:49:27.077786 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:49:27.077797 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:49:27.077810 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:49:27.077829 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jan 30 13:49:27.077843 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:49:27.077857 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:49:27.077869 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:49:27.077880 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:49:27.077892 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:49:27.077906 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:49:27.077920 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:49:27.077934 kernel: RETBleed: Vulnerable Jan 30 13:49:27.077948 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:49:27.077960 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:49:27.077973 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:49:27.077986 kernel: GDS: Unknown: Dependent on hypervisor status Jan 30 13:49:27.077998 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:49:27.078009 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:49:27.078031 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:49:27.078045 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:49:27.078057 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:49:27.078069 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:49:27.078081 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:49:27.078097 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:49:27.078112 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:49:27.078123 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:49:27.078135 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:49:27.078147 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:49:27.078160 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:49:27.078174 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:49:27.078188 kernel: landlock: Up and running. Jan 30 13:49:27.078201 kernel: SELinux: Initializing. Jan 30 13:49:27.078214 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:49:27.078228 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:49:27.078243 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:49:27.078262 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:49:27.078277 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:49:27.078292 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:49:27.078307 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:49:27.078322 kernel: signal: max sigframe size: 3632 Jan 30 13:49:27.078336 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:49:27.078352 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:49:27.078367 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:49:27.078381 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:49:27.078399 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:49:27.078414 kernel: .... node #0, CPUs: #1 Jan 30 13:49:27.078429 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:49:27.078446 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:49:27.078461 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:49:27.078476 kernel: smpboot: Max logical packages: 1 Jan 30 13:49:27.078491 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:49:27.078506 kernel: devtmpfs: initialized Jan 30 13:49:27.078524 kernel: x86/mm: Memory block size: 128MB Jan 30 13:49:27.078539 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:49:27.078554 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:49:27.078573 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:49:27.078586 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:49:27.078599 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:49:27.078614 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:49:27.078628 kernel: audit: type=2000 audit(1738244966.027:1): state=initialized audit_enabled=0 res=1 Jan 30 13:49:27.078643 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:49:27.078660 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:49:27.078675 kernel: cpuidle: using governor menu Jan 30 13:49:27.078707 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:49:27.078722 kernel: dca service started, version 1.12.1 Jan 30 13:49:27.078737 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:49:27.078752 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 13:49:27.078767 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:49:27.078781 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:49:27.078792 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:49:27.078808 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:49:27.078820 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:49:27.078833 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:49:27.078846 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:49:27.078859 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:49:27.078874 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:49:27.078888 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:49:27.078900 kernel: ACPI: Interpreter enabled Jan 30 13:49:27.078911 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:49:27.078926 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:49:27.078939 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:49:27.078952 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:49:27.078964 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:49:27.078978 kernel: iommu: Default domain type: Translated Jan 30 13:49:27.078991 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:49:27.079003 kernel: efivars: Registered efivars operations Jan 30 13:49:27.079016 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:49:27.079028 kernel: PCI: System does not support PCI Jan 30 13:49:27.079043 kernel: vgaarb: loaded Jan 30 13:49:27.079057 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:49:27.079071 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:49:27.079084 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:49:27.079098 kernel: 
pnp: PnP ACPI init Jan 30 13:49:27.079111 kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:49:27.079124 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:49:27.079137 kernel: NET: Registered PF_INET protocol family Jan 30 13:49:27.079150 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:49:27.079166 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:49:27.079180 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:49:27.079192 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:49:27.079205 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:49:27.079217 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:49:27.079232 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:49:27.079247 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:49:27.079261 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:49:27.079275 kernel: NET: Registered PF_XDP protocol family Jan 30 13:49:27.079294 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:49:27.079308 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:49:27.079323 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 30 13:49:27.079337 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:49:27.079352 kernel: Initialise system trusted keyrings Jan 30 13:49:27.079367 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:49:27.079379 kernel: Key type asymmetric registered Jan 30 13:49:27.079393 kernel: Asymmetric key parser 'x509' registered Jan 30 13:49:27.079406 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:49:27.079423 kernel: io scheduler mq-deadline registered Jan 30 13:49:27.079436 kernel: io scheduler kyber registered Jan 30 13:49:27.079449 kernel: io scheduler bfq registered Jan 30 13:49:27.079461 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:49:27.079474 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:49:27.079489 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:49:27.079502 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:49:27.079516 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:49:27.083134 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:49:27.083325 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:49:26 UTC (1738244966) Jan 30 13:49:27.083446 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:49:27.083464 kernel: intel_pstate: CPU model not supported Jan 30 13:49:27.083480 kernel: efifb: probing for efifb Jan 30 13:49:27.083496 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:49:27.083511 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:49:27.083526 kernel: efifb: scrolling: redraw Jan 30 13:49:27.083546 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:49:27.083559 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:49:27.083572 kernel: fb0: EFI VGA frame buffer device Jan 30 13:49:27.083585 kernel: pstore: Using crash dump compression: deflate Jan 30 13:49:27.083598 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:49:27.083614 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:49:27.083629 kernel: Segment Routing with IPv6 Jan 30 13:49:27.083644 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:49:27.083661 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:49:27.083675 kernel: Key type dns_resolver registered Jan 30 13:49:27.083712 kernel: IPI shorthand broadcast: enabled Jan 30 13:49:27.083725 kernel: sched_clock: Marking stable (824002900, 41204400)->(1061829800, -196622500) Jan 30 13:49:27.083739 kernel: registered taskstats version 1 Jan 30 13:49:27.083753 kernel: Loading compiled-in X.509 certificates Jan 30 13:49:27.083766 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:49:27.083778 kernel: Key type .fscrypt registered Jan 30 13:49:27.083793 kernel: Key type fscrypt-provisioning registered Jan 30 13:49:27.083807 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:49:27.083825 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:49:27.083839 kernel: ima: No architecture policies found Jan 30 13:49:27.083853 kernel: clk: Disabling unused clocks Jan 30 13:49:27.083867 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:49:27.083881 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:49:27.083894 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:49:27.083908 kernel: Run /init as init process Jan 30 13:49:27.083923 kernel: with arguments: Jan 30 13:49:27.083937 kernel: /init Jan 30 13:49:27.083952 kernel: with environment: Jan 30 13:49:27.083976 kernel: HOME=/ Jan 30 13:49:27.083990 kernel: TERM=linux Jan 30 13:49:27.084004 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:49:27.084023 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:49:27.084042 systemd[1]: Detected virtualization microsoft. Jan 30 13:49:27.084058 systemd[1]: Detected architecture x86-64. Jan 30 13:49:27.084072 systemd[1]: Running in initrd. Jan 30 13:49:27.084091 systemd[1]: No hostname configured, using default hostname. Jan 30 13:49:27.084106 systemd[1]: Hostname set to <localhost>. Jan 30 13:49:27.084123 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:49:27.084139 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:49:27.084154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:27.084170 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:27.084186 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:49:27.084202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:49:27.084222 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:49:27.084237 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:49:27.084256 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:49:27.084272 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:49:27.084288 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:27.084304 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:27.084319 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:49:27.084338 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:49:27.084354 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:49:27.084369 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:49:27.084386 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:49:27.084401 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:49:27.084417 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:49:27.084433 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:49:27.084448 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:27.084464 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:27.084484 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:27.084500 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:49:27.084516 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:49:27.084532 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:49:27.084547 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:49:27.084564 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:49:27.084578 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:49:27.084594 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:49:27.084614 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:27.084657 systemd-journald[176]: Collecting audit messages is disabled. Jan 30 13:49:27.084752 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:49:27.084770 systemd-journald[176]: Journal started Jan 30 13:49:27.084811 systemd-journald[176]: Runtime Journal (/run/log/journal/2be58b61524d4c8293a64c4a49f66b31) is 8.0M, max 158.8M, 150.8M free. 
Jan 30 13:49:27.078070 systemd-modules-load[177]: Inserted module 'overlay' Jan 30 13:49:27.095341 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:49:27.095980 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:27.103738 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:49:27.118552 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:49:27.130017 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:49:27.135939 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:27.138914 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:49:27.146700 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:49:27.165005 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:49:27.159823 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:49:27.172869 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:49:27.180369 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 30 13:49:27.180700 kernel: Bridge firewalling registered Jan 30 13:49:27.181752 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:27.184839 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:49:27.203426 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:27.209721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:27.215009 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:49:27.220320 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:27.239836 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:49:27.253052 systemd-resolved[211]: Positive Trust Anchors: Jan 30 13:49:27.253067 systemd-resolved[211]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:49:27.253126 systemd-resolved[211]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:49:27.277454 dracut-cmdline[213]: dracut-dracut-053 Jan 30 13:49:27.277454 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:49:27.258251 systemd-resolved[211]: Defaulting to hostname 'linux'. Jan 30 13:49:27.259821 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:49:27.299665 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:27.332708 kernel: SCSI subsystem initialized Jan 30 13:49:27.342704 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:49:27.353706 kernel: iscsi: registered transport (tcp) Jan 30 13:49:27.375459 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:49:27.375542 kernel: QLogic iSCSI HBA Driver Jan 30 13:49:27.412018 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:49:27.420849 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:49:27.447245 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:49:27.447337 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:49:27.450200 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:49:27.492713 kernel: raid6: avx512x4 gen() 18266 MB/s Jan 30 13:49:27.510703 kernel: raid6: avx512x2 gen() 18156 MB/s Jan 30 13:49:27.529695 kernel: raid6: avx512x1 gen() 18165 MB/s Jan 30 13:49:27.548698 kernel: raid6: avx2x4 gen() 18180 MB/s Jan 30 13:49:27.567697 kernel: raid6: avx2x2 gen() 18124 MB/s Jan 30 13:49:27.587857 kernel: raid6: avx2x1 gen() 13984 MB/s Jan 30 13:49:27.587911 kernel: raid6: using algorithm avx512x4 gen() 18266 MB/s Jan 30 13:49:27.608873 kernel: raid6: .... xor() 6725 MB/s, rmw enabled Jan 30 13:49:27.608916 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:49:27.630719 kernel: xor: automatically using best checksumming function avx Jan 30 13:49:27.782719 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:49:27.792397 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:49:27.799983 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:27.817285 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 30 13:49:27.823831 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 30 13:49:27.836948 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:49:27.849863 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 30 13:49:27.879668 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:49:27.889883 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:49:27.930886 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:27.942890 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:49:27.969375 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:49:27.973708 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:27.984214 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:27.989373 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:49:28.006392 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:49:28.019711 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:49:28.046505 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:28.055157 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:28.063856 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:49:28.063892 kernel: AES CTR mode by8 optimization enabled Jan 30 13:49:28.057610 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:28.078720 kernel: hv_vmbus: Vmbus version:5.2 Jan 30 13:49:28.069043 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:49:28.072323 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:28.072595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:28.075575 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:28.098702 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 30 13:49:28.098792 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:28.121702 kernel: hv_vmbus: registering driver hv_storvsc Jan 30 13:49:28.126769 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 30 13:49:28.126822 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 30 13:49:28.140043 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 30 13:49:28.140107 kernel: scsi host0: storvsc_host_t Jan 30 13:49:28.140292 kernel: hv_vmbus: registering driver hv_netvsc Jan 30 13:49:28.145486 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 30 13:49:28.149709 kernel: PTP clock support registered Jan 30 13:49:28.153727 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 30 13:49:28.153801 kernel: scsi host1: storvsc_host_t Jan 30 13:49:28.158078 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 30 13:49:28.161361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:28.176649 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 30 13:49:28.191657 kernel: hv_utils: Registering HyperV Utility Driver Jan 30 13:49:28.192208 kernel: hv_vmbus: registering driver hv_utils Jan 30 13:49:28.192226 kernel: hv_vmbus: registering driver hid_hyperv Jan 30 13:49:28.199752 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 30 13:49:28.207332 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 30 13:49:28.207605 kernel: hv_utils: Heartbeat IC version 3.0 Jan 30 13:49:28.209594 kernel: hv_utils: Shutdown IC version 3.2 Jan 30 13:49:28.212704 kernel: hv_utils: TimeSync IC version 4.0 Jan 30 13:49:28.818424 systemd-resolved[211]: Clock change detected. Flushing caches. Jan 30 13:49:28.834860 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 30 13:49:28.840917 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:49:28.840952 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 30 13:49:28.837165 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:28.855299 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 30 13:49:28.869811 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 30 13:49:28.870041 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:49:28.870222 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 30 13:49:28.870411 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 30 13:49:28.870586 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:28.870607 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:49:28.945403 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: VF slot 1 added Jan 30 13:49:28.954204 kernel: hv_vmbus: registering driver hv_pci Jan 30 13:49:28.959696 kernel: hv_pci c3902b8a-3e78-4c31-a243-2bf4f31719be: PCI VMBus probing: Using version 0x10004 Jan 30 13:49:29.003204 kernel: hv_pci c3902b8a-3e78-4c31-a243-2bf4f31719be: PCI host bridge to bus 3e78:00 Jan 30 13:49:29.003435 kernel: pci_bus 3e78:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 30 13:49:29.003610 kernel: pci_bus 3e78:00: No busn resource found for root bus, will use [bus 00-ff] Jan 30 13:49:29.003766 kernel: pci 3e78:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 30 13:49:29.003964 kernel: pci 3e78:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:49:29.004159 kernel: pci 3e78:00:02.0: enabling Extended Tags Jan 30 13:49:29.004334 kernel: pci 3e78:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 3e78:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 30 13:49:29.004505 kernel: pci_bus 3e78:00: busn_res: [bus 00-ff] end is updated to 00 Jan 30 13:49:29.004670 kernel: pci 3e78:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 30 13:49:29.176738 kernel: mlx5_core 3e78:00:02.0: enabling device (0000 -> 0002) Jan 30 13:49:29.407088 kernel: mlx5_core 3e78:00:02.0: firmware version: 14.30.5000 Jan 30 13:49:29.407317 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: VF registering: eth1 Jan 30 13:49:29.407484 kernel: mlx5_core 3e78:00:02.0 eth1: joined to eth0 Jan 30 13:49:29.407676 kernel: mlx5_core 3e78:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 30 13:49:29.354954 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Jan 30 13:49:29.414044 kernel: mlx5_core 3e78:00:02.0 enP15992s1: renamed from eth1 Jan 30 13:49:29.470277 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 30 13:49:29.484702 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (454) Jan 30 13:49:29.496031 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (456) Jan 30 13:49:29.509843 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 30 13:49:29.513321 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 30 13:49:29.519528 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:49:29.534244 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:49:29.547126 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:29.554024 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:30.562032 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:49:30.562873 disk-uuid[598]: The operation has completed successfully. Jan 30 13:49:30.654214 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:49:30.654332 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:49:30.668175 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:49:30.674135 sh[684]: Success Jan 30 13:49:30.708023 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:49:30.903149 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:49:30.917119 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 30 13:49:30.921668 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:49:30.944872 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:49:30.944953 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:30.948221 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:49:30.950794 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:49:30.953181 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:49:31.326604 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:49:31.327638 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 30 13:49:31.337258 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:49:31.340808 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:49:31.363445 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:31.363509 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:31.365768 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:31.390209 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:31.399493 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Jan 30 13:49:31.403427 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:31.410082 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:49:31.428193 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:49:31.440041 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:31.446825 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:49:31.469687 systemd-networkd[868]: lo: Link UP Jan 30 13:49:31.469698 systemd-networkd[868]: lo: Gained carrier Jan 30 13:49:31.471826 systemd-networkd[868]: Enumeration completed Jan 30 13:49:31.472111 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:49:31.472989 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:31.472993 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:49:31.475398 systemd[1]: Reached target network.target - Network. Jan 30 13:49:31.541027 kernel: mlx5_core 3e78:00:02.0 enP15992s1: Link up Jan 30 13:49:31.575158 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: Data path switched to VF: enP15992s1 Jan 30 13:49:31.575580 systemd-networkd[868]: enP15992s1: Link UP Jan 30 13:49:31.575942 systemd-networkd[868]: eth0: Link UP Jan 30 13:49:31.576107 systemd-networkd[868]: eth0: Gained carrier Jan 30 13:49:31.576119 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:31.586264 systemd-networkd[868]: enP15992s1: Gained carrier Jan 30 13:49:31.606057 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 30 13:49:32.257492 ignition[853]: Ignition 2.19.0 Jan 30 13:49:32.257504 ignition[853]: Stage: fetch-offline Jan 30 13:49:32.259335 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:32.257552 ignition[853]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:32.257562 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:32.257684 ignition[853]: parsed url from cmdline: "" Jan 30 13:49:32.257689 ignition[853]: no config URL provided Jan 30 13:49:32.257695 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:32.257711 ignition[853]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:32.277140 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 30 13:49:32.257718 ignition[853]: failed to fetch config: resource requires networking Jan 30 13:49:32.257971 ignition[853]: Ignition finished successfully Jan 30 13:49:32.291881 ignition[876]: Ignition 2.19.0 Jan 30 13:49:32.291888 ignition[876]: Stage: fetch Jan 30 13:49:32.292679 ignition[876]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:32.292693 ignition[876]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:32.292813 ignition[876]: parsed url from cmdline: "" Jan 30 13:49:32.292820 ignition[876]: no config URL provided Jan 30 13:49:32.292826 ignition[876]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:49:32.292834 ignition[876]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:49:32.292856 ignition[876]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 30 13:49:32.369249 ignition[876]: GET result: OK Jan 30 13:49:32.369708 ignition[876]: config has been read from IMDS userdata Jan 30 13:49:32.369743 ignition[876]: parsing config with SHA512: e999c99ed0afce266b24cac14caa53daa44e715fd1c7c558d1f03246320547cb1743db0fcbe78fddb64b2b997d8540b1a7f992c753cdc444ed74ca5abb25fe16 Jan 30 13:49:32.376902 unknown[876]: fetched base config from "system" Jan 30 13:49:32.376918 unknown[876]: fetched base config from "system" Jan 30 13:49:32.377354 ignition[876]: fetch: fetch complete Jan 30 13:49:32.376926 unknown[876]: fetched user config from "azure" Jan 30 13:49:32.377361 ignition[876]: fetch: fetch passed Jan 30 13:49:32.383102 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 30 13:49:32.377408 ignition[876]: Ignition finished successfully Jan 30 13:49:32.396316 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:49:32.415353 ignition[882]: Ignition 2.19.0 Jan 30 13:49:32.415365 ignition[882]: Stage: kargs Jan 30 13:49:32.415594 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:32.415607 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:32.416915 ignition[882]: kargs: kargs passed Jan 30 13:49:32.416965 ignition[882]: Ignition finished successfully Jan 30 13:49:32.427022 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:49:32.437167 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:49:32.452616 ignition[888]: Ignition 2.19.0 Jan 30 13:49:32.452628 ignition[888]: Stage: disks Jan 30 13:49:32.454665 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:49:32.452843 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:32.457894 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:32.452856 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:32.461368 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:49:32.453723 ignition[888]: disks: disks passed Jan 30 13:49:32.453771 ignition[888]: Ignition finished successfully Jan 30 13:49:32.478466 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:49:32.478578 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:49:32.479032 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:49:32.499374 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 30 13:49:32.574679 systemd-fsck[896]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 30 13:49:32.578902 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:49:32.588421 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:49:32.669261 systemd-networkd[868]: eth0: Gained IPv6LL Jan 30 13:49:32.686023 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:49:32.686578 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:49:32.691045 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:49:32.732136 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:49:32.737084 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 30 13:49:32.748546 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (908) Jan 30 13:49:32.743196 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 30 13:49:32.754676 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:32.760714 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:32.760765 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:32.761465 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:49:32.773702 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:32.761512 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:32.768024 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:49:32.775217 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:49:32.785301 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:32.845151 systemd-networkd[868]: enP15992s1: Gained IPv6LL Jan 30 13:49:33.377732 coreos-metadata[910]: Jan 30 13:49:33.377 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:49:33.384008 coreos-metadata[910]: Jan 30 13:49:33.383 INFO Fetch successful Jan 30 13:49:33.386599 coreos-metadata[910]: Jan 30 13:49:33.384 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:49:33.398108 coreos-metadata[910]: Jan 30 13:49:33.398 INFO Fetch successful Jan 30 13:49:33.416046 coreos-metadata[910]: Jan 30 13:49:33.415 INFO wrote hostname ci-4081.3.0-a-38674a3e2a to /sysroot/etc/hostname Jan 30 13:49:33.420303 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:49:33.500566 initrd-setup-root[938]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:49:33.537361 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:49:33.561618 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:49:33.587708 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:49:34.472441 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:34.482150 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:49:34.490230 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 30 13:49:34.499220 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:34.500271 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:49:34.529981 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:49:34.537154 ignition[1027]: INFO : Ignition 2.19.0 Jan 30 13:49:34.537154 ignition[1027]: INFO : Stage: mount Jan 30 13:49:34.539608 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:34.539608 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:34.539608 ignition[1027]: INFO : mount: mount passed Jan 30 13:49:34.539608 ignition[1027]: INFO : Ignition finished successfully Jan 30 13:49:34.539335 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:49:34.558130 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:49:34.572211 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:49:34.584216 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1038) Jan 30 13:49:34.584268 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:49:34.588021 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:49:34.591926 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:49:34.597027 kernel: BTRFS info (device sda6): auto enabling async discard Jan 30 13:49:34.598739 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:49:34.626142 ignition[1055]: INFO : Ignition 2.19.0 Jan 30 13:49:34.626142 ignition[1055]: INFO : Stage: files Jan 30 13:49:34.630745 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:34.630745 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:34.630745 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:49:34.662701 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:49:34.662701 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:49:34.750208 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:49:34.753959 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:49:34.757512 unknown[1055]: wrote ssh authorized keys file for user: core Jan 30 13:49:34.760149 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:49:34.802462 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:49:34.807652 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:49:34.846369 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:49:35.059714 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:49:35.059714 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:49:35.068518 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 30 13:49:35.068518 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:35.076482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:49:35.076482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:35.084539 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:49:35.088669 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:35.093275 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:49:35.097482 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:35.101656 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:49:35.105753 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.111677 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.117215 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.125665 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 30 13:49:35.651806 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:49:35.955905 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 30 13:49:35.955905 ignition[1055]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 30 13:49:35.969645 ignition[1055]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:35.974174 ignition[1055]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:49:35.974174 ignition[1055]: 
INFO : files: files passed Jan 30 13:49:35.974174 ignition[1055]: INFO : Ignition finished successfully Jan 30 13:49:35.971641 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:49:36.010213 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:49:36.016126 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:49:36.027254 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:49:36.027386 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:49:36.060549 initrd-setup-root-after-ignition[1083]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.060549 initrd-setup-root-after-ignition[1083]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.068078 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:49:36.074247 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:36.074548 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:49:36.091318 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:49:36.113789 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:49:36.113907 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:49:36.125421 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:49:36.127921 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:49:36.132603 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:49:36.146203 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:49:36.160170 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:36.167287 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:49:36.179786 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:36.180103 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:49:36.180613 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:49:36.181128 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:49:36.181269 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:49:36.181814 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:49:36.182689 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:49:36.183058 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:49:36.183414 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:49:36.183772 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:49:36.184148 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:49:36.184612 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:49:36.185042 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:49:36.185414 systemd[1]: Stopped target local-fs.target - Local File Systems. 
Jan 30 13:49:36.185796 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:49:36.186153 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:49:36.186282 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:49:36.186938 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:36.187339 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:36.187686 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:49:36.220412 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:36.225660 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:49:36.230507 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:49:36.281527 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:49:36.281746 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:49:36.290186 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:49:36.290349 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:49:36.296946 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:49:36.297103 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:49:36.313429 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:49:36.315717 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:49:36.317836 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:36.324535 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:49:36.330447 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:49:36.330631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:36.342504 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:49:36.349127 ignition[1107]: INFO : Ignition 2.19.0 Jan 30 13:49:36.349127 ignition[1107]: INFO : Stage: umount Jan 30 13:49:36.349127 ignition[1107]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:49:36.349127 ignition[1107]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:49:36.342677 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:49:36.362106 ignition[1107]: INFO : umount: umount passed Jan 30 13:49:36.362106 ignition[1107]: INFO : Ignition finished successfully Jan 30 13:49:36.360340 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:49:36.360430 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:49:36.365111 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:49:36.365393 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:49:36.378550 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:49:36.378618 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:49:36.385051 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:49:36.385127 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:49:36.391665 systemd[1]: Stopped target network.target - Network. Jan 30 13:49:36.391752 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Jan 30 13:49:36.391809 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:49:36.392518 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:49:36.392851 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:49:36.398062 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:36.402859 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:49:36.404922 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:49:36.407236 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:49:36.407292 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:49:36.411779 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:49:36.411833 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:49:36.434961 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:49:36.435072 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:49:36.439316 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:49:36.439379 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:49:36.444100 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:49:36.448563 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:49:36.458069 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 30 13:49:36.460120 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:49:36.461069 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:49:36.461194 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:49:36.467520 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:49:36.467654 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:49:36.472398 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:49:36.472486 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:49:36.477408 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:49:36.477497 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:49:36.485277 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:49:36.485353 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:36.489065 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:49:36.489138 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:49:36.510103 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:49:36.514589 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:49:36.514664 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:49:36.520263 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:49:36.523106 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:36.525529 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:49:36.525575 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:49:36.530201 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:49:36.530256 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jan 30 13:49:36.535506 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:36.561491 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:49:36.561675 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:36.570971 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:49:36.571070 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:36.573927 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:49:36.573966 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:36.574811 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:49:36.574852 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:49:36.591845 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:49:36.591933 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:49:36.596512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:49:36.596571 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:49:36.609203 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:49:36.611706 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:49:36.611782 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:36.633162 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: Data path switched from VF: enP15992s1 Jan 30 13:49:36.624328 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:49:36.624406 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:49:36.627703 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:49:36.627808 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:49:36.652862 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:49:36.652998 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:49:36.658094 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:49:36.673286 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:49:36.699059 systemd[1]: Switching root. Jan 30 13:49:36.768717 systemd-journald[176]: Journal stopped Jan 30 13:49:42.953334 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jan 30 13:49:42.953375 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:49:42.953393 kernel: SELinux: policy capability open_perms=1 Jan 30 13:49:42.953407 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:49:42.953420 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:49:42.953434 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:49:42.953449 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:49:42.953466 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:49:42.953480 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:49:42.953495 kernel: audit: type=1403 audit(1738244978.732:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:49:42.953511 systemd[1]: Successfully loaded SELinux policy in 137.579ms. 
Jan 30 13:49:42.953528 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.735ms. Jan 30 13:49:42.953545 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:49:42.953561 systemd[1]: Detected virtualization microsoft. Jan 30 13:49:42.953581 systemd[1]: Detected architecture x86-64. Jan 30 13:49:42.953597 systemd[1]: Detected first boot. Jan 30 13:49:42.953614 systemd[1]: Hostname set to <ci-4081.3.0-a-38674a3e2a>. Jan 30 13:49:42.953630 systemd[1]: Initializing machine ID from random generator. Jan 30 13:49:42.953646 zram_generator::config[1150]: No configuration found. Jan 30 13:49:42.953666 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:49:42.953683 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:49:42.953699 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:49:42.953715 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:49:42.953732 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:49:42.953748 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:49:42.953766 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:49:42.953785 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:49:42.953803 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:49:42.953819 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:49:42.953836 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:49:42.953853 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:49:42.953870 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:49:42.953889 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:49:42.953905 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:49:42.953925 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:49:42.953942 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:49:42.953959 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:49:42.953976 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:49:42.953993 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:49:42.967055 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:49:42.967093 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:49:42.967112 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 30 13:49:42.967133 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:49:42.967151 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:49:42.967169 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:49:42.967187 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:49:42.967204 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:49:42.967222 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:49:42.967239 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:49:42.967259 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:49:42.967277 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:49:42.967296 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:49:42.967315 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:49:42.967332 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:49:42.967353 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:49:42.967371 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:49:42.967388 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:42.967406 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:49:42.967424 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:49:42.967442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:49:42.967461 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:49:42.967479 systemd[1]: Reached target machines.target - Containers. Jan 30 13:49:42.967499 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:49:42.967516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:49:42.967534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:49:42.967552 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:49:42.967569 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:49:42.967587 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:49:42.967606 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:49:42.967623 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:49:42.967641 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:49:42.967662 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:49:42.967680 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:49:42.967698 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:49:42.967716 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:49:42.967733 systemd[1]: Stopped systemd-fsck-usr.service. Jan 30 13:49:42.967751 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:49:42.967769 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 30 13:49:42.967786 kernel: loop: module loaded Jan 30 13:49:42.967805 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:49:42.967824 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:49:42.967843 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:49:42.967860 kernel: fuse: init (API version 7.39) Jan 30 13:49:42.967877 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:49:42.967924 systemd-journald[1253]: Collecting audit messages is disabled. Jan 30 13:49:42.967965 systemd[1]: Stopped verity-setup.service. Jan 30 13:49:42.967983 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:42.968010 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:49:42.968029 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:49:42.968047 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:49:42.968064 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:49:42.968083 systemd-journald[1253]: Journal started Jan 30 13:49:42.968121 systemd-journald[1253]: Runtime Journal (/run/log/journal/ddd270d6e4e341d2a38608c05eb7de33) is 8.0M, max 158.8M, 150.8M free. Jan 30 13:49:42.245091 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:49:42.350500 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:49:42.350883 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:49:42.976018 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:49:42.979561 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:49:42.982661 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:49:42.986839 kernel: ACPI: bus type drm_connector registered Jan 30 13:49:42.987533 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:49:42.990529 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:49:42.993838 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:49:42.994069 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:49:42.997121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:49:42.997282 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:49:43.000147 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:49:43.000296 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:49:43.003387 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:49:43.003541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:49:43.006510 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:49:43.006667 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:49:43.009439 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:49:43.009594 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:49:43.012379 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
Jan 30 13:49:43.015336 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:49:43.018753 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:49:43.031155 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:49:43.040148 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:49:43.047147 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:49:43.050489 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:49:43.050532 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:49:43.054793 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:49:43.067304 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:49:43.076233 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:49:43.078832 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:49:43.105668 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:49:43.110409 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:49:43.113594 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:49:43.117136 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:49:43.119879 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:49:43.125157 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:49:43.132196 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:49:43.141211 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:49:43.147105 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:49:43.150690 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:49:43.155217 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:49:43.163281 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:49:43.178189 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:49:43.189443 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:49:43.192772 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:49:43.198205 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:49:43.209239 systemd-journald[1253]: Time spent on flushing to /var/log/journal/ddd270d6e4e341d2a38608c05eb7de33 is 23.481ms for 961 entries. Jan 30 13:49:43.209239 systemd-journald[1253]: System Journal (/var/log/journal/ddd270d6e4e341d2a38608c05eb7de33) is 8.0M, max 2.6G, 2.6G free. Jan 30 13:49:43.250242 systemd-journald[1253]: Received client request to flush runtime journal. 
Jan 30 13:49:43.250306 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:49:43.214204 udevadm[1294]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:49:43.251449 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:49:43.254961 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:49:43.276243 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:49:43.278646 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:49:43.448116 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:49:43.459162 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:49:43.546497 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jan 30 13:49:43.546524 systemd-tmpfiles[1304]: ACLs are not supported, ignoring. Jan 30 13:49:43.552937 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:49:43.631471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:49:43.677039 kernel: loop1: detected capacity change from 0 to 31056 Jan 30 13:49:44.099034 kernel: loop2: detected capacity change from 0 to 140768 Jan 30 13:49:44.464787 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:49:44.473202 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:49:44.501058 systemd-udevd[1311]: Using default interface naming scheme 'v255'. Jan 30 13:49:44.540032 kernel: loop3: detected capacity change from 0 to 205544 Jan 30 13:49:44.574030 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:49:44.587023 kernel: loop5: detected capacity change from 0 to 31056 Jan 30 13:49:44.603279 kernel: loop6: detected capacity change from 0 to 140768 Jan 30 13:49:44.615038 kernel: loop7: detected capacity change from 0 to 205544 Jan 30 13:49:44.620027 (sd-merge)[1314]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 30 13:49:44.620573 (sd-merge)[1314]: Merged extensions into '/usr'. Jan 30 13:49:44.624039 systemd[1]: Reloading requested from client PID 1286 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:49:44.624055 systemd[1]: Reloading... Jan 30 13:49:44.693145 zram_generator::config[1340]: No configuration found. Jan 30 13:49:44.837790 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:45.010495 systemd[1]: Reloading finished in 385 ms. Jan 30 13:49:45.043359 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:49:45.051148 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:49:45.054491 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:49:45.071974 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:49:45.079195 systemd[1]: Starting ensure-sysext.service... Jan 30 13:49:45.090458 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 30 13:49:45.105317 kernel: hv_vmbus: registering driver hv_balloon Jan 30 13:49:45.105398 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 30 13:49:45.106670 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:49:45.158232 systemd[1]: Reloading requested from client PID 1435 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:49:45.158253 systemd[1]: Reloading... Jan 30 13:49:45.164635 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:49:45.169859 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:49:45.171058 kernel: hv_vmbus: registering driver hyperv_fb Jan 30 13:49:45.178980 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 30 13:49:45.179065 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 30 13:49:45.180320 systemd-tmpfiles[1438]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:49:45.184774 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jan 30 13:49:45.187758 kernel: Console: switching to colour dummy device 80x25 Jan 30 13:49:45.187118 systemd-tmpfiles[1438]: ACLs are not supported, ignoring. Jan 30 13:49:45.191868 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:49:45.214783 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:49:45.217025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406) Jan 30 13:49:45.217191 systemd-tmpfiles[1438]: Skipping /boot Jan 30 13:49:45.336536 systemd-tmpfiles[1438]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:49:45.340117 systemd-tmpfiles[1438]: Skipping /boot Jan 30 13:49:45.421408 zram_generator::config[1484]: No configuration found. Jan 30 13:49:45.610045 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 30 13:49:45.697817 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:45.796266 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 30 13:49:45.800158 systemd[1]: Reloading finished in 641 ms. Jan 30 13:49:45.827850 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:49:45.867176 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:45.872288 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:49:45.879453 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:49:45.883198 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:49:45.887465 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:49:45.893800 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:49:45.898529 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:49:45.904279 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
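The "Duplicate line for path" messages above are systemd-tmpfiles warnings: two tmpfiles.d fragments declare the same path, and only the first declaration read takes effect. A minimal sketch of the pattern that triggers it (hypothetical fragments, not the actual Flatcar ones):

    # /usr/lib/tmpfiles.d/first.conf      -- this declaration wins
    d /root 0700 root root -
    # /usr/lib/tmpfiles.d/provision.conf  -- duplicate, logged and ignored
    d /root 0700 root root -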
Jan 30 13:49:45.909179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:49:45.916065 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:49:45.921667 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:49:45.933434 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:49:45.937220 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:49:45.949298 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:49:45.959986 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:49:45.964346 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:49:45.968921 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:49:45.972404 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:49:45.976409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:49:45.976612 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:49:45.980407 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:49:45.980599 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:49:45.984062 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:49:45.984293 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:49:45.987981 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:49:45.988229 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:49:45.993772 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:49:46.005815 systemd[1]: Finished ensure-sysext.service. Jan 30 13:49:46.031279 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:49:46.034278 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:49:46.034373 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:49:46.034682 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:49:46.041764 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:49:46.145033 systemd-resolved[1572]: Positive Trust Anchors: Jan 30 13:49:46.146421 systemd-resolved[1572]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:49:46.146625 systemd-resolved[1572]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:49:46.155317 lvm[1590]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:49:46.191152 systemd-resolved[1572]: Using system hostname 'ci-4081.3.0-a-38674a3e2a'. Jan 30 13:49:46.195231 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:49:46.197838 augenrules[1608]: No rules Jan 30 13:49:46.199090 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:49:46.202496 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:49:46.205513 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:49:46.210831 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:49:46.211052 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:49:46.217587 systemd-networkd[1436]: lo: Link UP Jan 30 13:49:46.217592 systemd-networkd[1436]: lo: Gained carrier Jan 30 13:49:46.222248 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:49:46.223521 systemd-networkd[1436]: Enumeration completed Jan 30 13:49:46.223890 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:46.223894 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:49:46.224984 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:49:46.225317 systemd[1]: Reached target network.target - Network. Jan 30 13:49:46.228160 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:49:46.238654 lvm[1616]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:49:46.272332 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:49:46.282054 kernel: mlx5_core 3e78:00:02.0 enP15992s1: Link up Jan 30 13:49:46.304024 kernel: hv_netvsc 000d3ab6-20da-000d-3ab6-20da000d3ab6 eth0: Data path switched to VF: enP15992s1 Jan 30 13:49:46.305141 systemd-networkd[1436]: enP15992s1: Link UP Jan 30 13:49:46.305325 systemd-networkd[1436]: eth0: Link UP Jan 30 13:49:46.305330 systemd-networkd[1436]: eth0: Gained carrier Jan 30 13:49:46.305360 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:46.310348 systemd-networkd[1436]: enP15992s1: Gained carrier Jan 30 13:49:46.332087 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 30 13:49:46.640236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
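The positive trust anchor systemd-resolved loads above is the DNSSEC root key-signing key (key tag 20326), and the negative list exempts private and reverse-lookup zones from DNSSEC validation. Both lists can be extended with drop-in files; a minimal sketch, assuming the standard dnssec-trust-anchors.d mechanism (the file names are illustrative):

    # /etc/dnssec-trust-anchors.d/root.positive  -- zone-file DS record, as logged above
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

    # /etc/dnssec-trust-anchors.d/local.negative -- one domain per line
    home.arpa
    168.192.in-addr.arpa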
Jan 30 13:49:46.882562 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:49:46.886083 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:49:48.141268 systemd-networkd[1436]: eth0: Gained IPv6LL Jan 30 13:49:48.144182 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:49:48.148868 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:49:48.205304 systemd-networkd[1436]: enP15992s1: Gained IPv6LL Jan 30 13:49:49.615721 ldconfig[1281]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:49:49.628644 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:49:49.637277 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:49:49.665503 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:49:49.669239 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:49:49.672595 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:49:49.676029 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:49:49.679218 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:49:49.681731 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:49:49.684657 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:49:49.687656 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:49:49.687720 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:49:49.689968 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:49:49.693111 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:49:49.697119 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:49:49.728823 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:49:49.732512 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:49:49.735457 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:49:49.738040 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:49:49.740406 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:49:49.740446 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:49:49.787164 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 13:49:49.794167 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:49:49.802179 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:49:49.810216 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:49:49.819131 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:49:49.825117 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
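The repeated "ListenStream= references a path below legacy directory /var/run/" message during the reloads earlier refers to line 6 of the docker.socket unit now listening above; systemd rewrites the path to /run at load time and keeps going. The permanent fix is a one-line edit (a sketch, assuming the stock unit layout):

    # /usr/lib/systemd/system/docker.socket, line 6
    [Socket]
    -ListenStream=/var/run/docker.sock
    +ListenStream=/run/docker.sock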
Jan 30 13:49:49.828046 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:49:49.828111 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 13:49:49.833104 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 13:49:49.836217 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 30 13:49:49.840174 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:49.853202 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:49:49.854540 jq[1633]: false Jan 30 13:49:49.858209 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:49:49.863142 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:49:49.864345 (chronyd)[1629]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 13:49:49.876884 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:49:49.888164 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:49:49.902783 chronyd[1648]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 13:49:49.913251 chronyd[1648]: Timezone right/UTC failed leap second check, ignoring Jan 30 13:49:49.904396 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:49:49.923354 extend-filesystems[1634]: Found loop4 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found loop5 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found loop6 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found loop7 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda1 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda2 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda3 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found usr Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda4 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda6 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda7 Jan 30 13:49:49.923354 extend-filesystems[1634]: Found sda9 Jan 30 13:49:49.923354 extend-filesystems[1634]: Checking size of /dev/sda9 Jan 30 13:49:49.913485 chronyd[1648]: Loaded seccomp filter (level 2) Jan 30 13:49:49.907824 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:49:49.949112 KVP[1635]: KVP starting; pid is:1635 Jan 30 13:49:49.911275 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:49:49.920439 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:49:49.937992 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:49:49.955555 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 13:49:49.974627 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
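The "(chronyd)" line above about an unset OPTIONS variable is systemd noting that chronyd.service expands $OPTIONS from an environment file that defines nothing; the empty expansion is harmless. A sketch of the pattern, with a hypothetical env-file path (the file name and value here are assumptions, not read from this host):

    # hypothetical /etc/default/chronyd, referenced from the unit via EnvironmentFile=
    OPTIONS="-F 1"    # e.g. chrony's seccomp filter level; leaving it empty also works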
Jan 30 13:49:49.975191 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:49:49.984034 kernel: hv_utils: KVP IC version 4.0 Jan 30 13:49:49.984140 KVP[1635]: KVP LIC Version: 3.1 Jan 30 13:49:49.989694 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:49:49.989921 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:49:49.993732 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:49:49.998574 dbus-daemon[1632]: [system] SELinux support is enabled Jan 30 13:49:50.002289 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:49:50.008964 extend-filesystems[1634]: Old size kept for /dev/sda9 Jan 30 13:49:50.011818 extend-filesystems[1634]: Found sr0 Jan 30 13:49:50.014685 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:49:50.014948 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:49:50.036795 jq[1654]: true Jan 30 13:49:50.057047 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:49:50.057652 (ntainerd)[1673]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:49:50.057968 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:49:50.066892 systemd-logind[1646]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:49:50.067450 systemd-logind[1646]: New seat seat0. Jan 30 13:49:50.069793 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:49:50.073600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:49:50.073696 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:49:50.078993 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:49:50.082126 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:49:50.085394 dbus-daemon[1632]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:49:50.092802 jq[1677]: true Jan 30 13:49:50.145564 tar[1665]: linux-amd64/helm Jan 30 13:49:50.155564 update_engine[1649]: I20250130 13:49:50.155459 1649 main.cc:92] Flatcar Update Engine starting Jan 30 13:49:50.159397 coreos-metadata[1631]: Jan 30 13:49:50.157 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:49:50.159434 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:49:50.168296 update_engine[1649]: I20250130 13:49:50.168234 1649 update_check_scheduler.cc:74] Next update check in 8m1s Jan 30 13:49:50.171086 coreos-metadata[1631]: Jan 30 13:49:50.171 INFO Fetch successful Jan 30 13:49:50.172575 coreos-metadata[1631]: Jan 30 13:49:50.172 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 13:49:50.176211 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
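coreos-metadata begins its fetch sequence above and continues below against two Azure endpoints: the wireserver at 168.63.129.16 and the instance metadata service at 169.254.169.254. IMDS calls must carry a "Metadata: true" header or the service rejects them; a minimal sketch of the vmSize fetch that follows, in Python with only the standard library:

    import urllib.request

    # Same URL coreos-metadata fetches below; Azure IMDS requires the Metadata header.
    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # plain-text VM size, e.g. Standard_DS2_v2 (illustrative)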
Jan 30 13:49:50.188057 coreos-metadata[1631]: Jan 30 13:49:50.187 INFO Fetch successful Jan 30 13:49:50.189355 coreos-metadata[1631]: Jan 30 13:49:50.189 INFO Fetching http://168.63.129.16/machine/056fe9e7-b937-431b-bb5a-87e7c5ed4958/f56fa266%2Dc862%2D40be%2D87fc%2D08f28bb39672.%5Fci%2D4081.3.0%2Da%2D38674a3e2a?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 13:49:50.195211 coreos-metadata[1631]: Jan 30 13:49:50.193 INFO Fetch successful Jan 30 13:49:50.195211 coreos-metadata[1631]: Jan 30 13:49:50.194 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:49:50.208661 coreos-metadata[1631]: Jan 30 13:49:50.208 INFO Fetch successful Jan 30 13:49:50.249026 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1697) Jan 30 13:49:50.302389 bash[1720]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:49:50.305365 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:49:50.308798 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:49:50.313501 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:49:50.319832 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:49:50.684972 locksmithd[1710]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:49:51.171373 sshd_keygen[1702]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:49:51.204390 tar[1665]: linux-amd64/LICENSE Jan 30 13:49:51.204390 tar[1665]: linux-amd64/README.md Jan 30 13:49:51.224580 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:49:51.230313 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:49:51.243973 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:49:51.250206 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 13:49:51.263652 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:49:51.263892 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:49:51.279644 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:49:51.307622 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:49:51.321256 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:49:51.330803 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:49:51.334187 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:49:51.343298 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 13:49:51.350519 containerd[1673]: time="2025-01-30T13:49:51.350420400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:49:51.388116 containerd[1673]: time="2025-01-30T13:49:51.387908600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.389792200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.389841200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.389864800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390068800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390097000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390183000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390201200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390426400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390448600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390469200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390577 containerd[1673]: time="2025-01-30T13:49:51.390483300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390998 containerd[1673]: time="2025-01-30T13:49:51.390580500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.390998 containerd[1673]: time="2025-01-30T13:49:51.390868300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:49:51.391366 containerd[1673]: time="2025-01-30T13:49:51.391095600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:49:51.391366 containerd[1673]: time="2025-01-30T13:49:51.391124200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:49:51.391366 containerd[1673]: time="2025-01-30T13:49:51.391237200Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 30 13:49:51.391366 containerd[1673]: time="2025-01-30T13:49:51.391291200Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:49:51.401336 containerd[1673]: time="2025-01-30T13:49:51.401298100Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:49:51.401427 containerd[1673]: time="2025-01-30T13:49:51.401381200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:49:51.401427 containerd[1673]: time="2025-01-30T13:49:51.401405400Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:49:51.401495 containerd[1673]: time="2025-01-30T13:49:51.401463500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:49:51.401495 containerd[1673]: time="2025-01-30T13:49:51.401488200Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:49:51.401909 containerd[1673]: time="2025-01-30T13:49:51.401668500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:49:51.402251 containerd[1673]: time="2025-01-30T13:49:51.402226200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:49:51.402486 containerd[1673]: time="2025-01-30T13:49:51.402439300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:49:51.402486 containerd[1673]: time="2025-01-30T13:49:51.402465400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:49:51.402580 containerd[1673]: time="2025-01-30T13:49:51.402494900Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 30 13:49:51.402580 containerd[1673]: time="2025-01-30T13:49:51.402530400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402580 containerd[1673]: time="2025-01-30T13:49:51.402551400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402580 containerd[1673]: time="2025-01-30T13:49:51.402569800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402590000Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402610100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402629400Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402646800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402664300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402692800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402721 containerd[1673]: time="2025-01-30T13:49:51.402712700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402732600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402752000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402770300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402804600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402824700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402842600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402868600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402892200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402912200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402929200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.402959 containerd[1673]: time="2025-01-30T13:49:51.402948300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.402978100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403025600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403044100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403059900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403128600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403155400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403248400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403269800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403284300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403301900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403315400Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:49:51.403349 containerd[1673]: time="2025-01-30T13:49:51.403330400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.403717100Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.403803700Z" level=info msg="Connect containerd service" Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.403846300Z" level=info msg="using legacy CRI server" Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.403856600Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.403981700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:49:51.404890 containerd[1673]: time="2025-01-30T13:49:51.404857900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405026800Z" level=info msg="Start subscribing containerd event" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405088100Z" level=info msg="Start recovering state" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405174000Z" level=info msg="Start event monitor" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405194000Z" level=info msg="Start snapshots syncer" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405205400Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:49:51.405287 containerd[1673]: time="2025-01-30T13:49:51.405218300Z" level=info msg="Start streaming server" Jan 30 13:49:51.405736 containerd[1673]: time="2025-01-30T13:49:51.405688900Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:49:51.405812 containerd[1673]: time="2025-01-30T13:49:51.405746800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:49:51.407193 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:49:51.410359 containerd[1673]: time="2025-01-30T13:49:51.407891300Z" level=info msg="containerd successfully booted in 0.058537s" Jan 30 13:49:51.729032 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:51.732662 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:49:51.735366 systemd[1]: Startup finished in 837ms (firmware) + 29.657s (loader) + 962ms (kernel) + 11.291s (initrd) + 13.138s (userspace) = 55.886s. Jan 30 13:49:51.744530 (kubelet)[1795]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:49:52.159398 login[1782]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:49:52.160496 login[1784]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:49:52.176826 systemd-logind[1646]: New session 1 of user core. Jan 30 13:49:52.177686 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:49:52.184295 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:49:52.193675 systemd-logind[1646]: New session 2 of user core. 
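The long CRI configuration block above is containerd echoing its effective config at startup; the operative settings for this node are the overlayfs snapshotter, the runc v2 runtime with SystemdCgroup:true, and sandbox image registry.k8s.io/pause:3.8. Expressed in config.toml form those keys would look roughly like this (a sketch of the corresponding TOML, not this host's actual file):

    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"
          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true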
Jan 30 13:49:52.207016 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:49:52.214350 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:49:52.222092 (systemd)[1807]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:49:52.373291 kubelet[1795]: E0130 13:49:52.373193 1795 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:49:52.375728 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:49:52.375929 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:49:52.526412 systemd[1807]: Queued start job for default target default.target. Jan 30 13:49:52.530118 systemd[1807]: Created slice app.slice - User Application Slice. Jan 30 13:49:52.530157 systemd[1807]: Reached target paths.target - Paths. Jan 30 13:49:52.530175 systemd[1807]: Reached target timers.target - Timers. Jan 30 13:49:52.531443 systemd[1807]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:49:52.543199 systemd[1807]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:49:52.543343 systemd[1807]: Reached target sockets.target - Sockets. Jan 30 13:49:52.543365 systemd[1807]: Reached target basic.target - Basic System. Jan 30 13:49:52.543408 systemd[1807]: Reached target default.target - Main User Target. Jan 30 13:49:52.543444 systemd[1807]: Startup finished in 312ms. Jan 30 13:49:52.543933 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:49:52.552182 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:49:52.553190 systemd[1]: Started session-2.scope - Session 2 of User core. 
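The kubelet exit above is expected at this point in boot: the unit started before anything wrote /var/lib/kubelet/config.yaml (typically kubeadm generates that file during init or join), so it exits with status 1 until the file exists. For reference, the file it wants is a KubeletConfiguration object; a minimal sketch with illustrative values (not what kubeadm would actually render here):

    # /var/lib/kubelet/config.yaml (sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd   # matches SystemdCgroup = true on the containerd side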
Jan 30 13:49:53.107082 waagent[1785]: 2025-01-30T13:49:53.106952Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.107506Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.108473Z INFO Daemon Daemon Python: 3.11.9 Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.109763Z INFO Daemon Daemon Run daemon Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.110483Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.110826Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.111810Z INFO Daemon Daemon Activate resource disk Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.112517Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.117090Z INFO Daemon Daemon Found device: None Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.117818Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.118886Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.121095Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:49:53.140453 waagent[1785]: 2025-01-30T13:49:53.121839Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:49:53.143850 waagent[1785]: 2025-01-30T13:49:53.143757Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 13:49:53.153906 waagent[1785]: 2025-01-30T13:49:53.153828Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:49:53.158184 waagent[1785]: 2025-01-30T13:49:53.158090Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:49:53.161917 waagent[1785]: 2025-01-30T13:49:53.158279Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:49:53.236031 waagent[1785]: 2025-01-30T13:49:53.232291Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:49:53.248097 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 30 13:49:53.250126 waagent[1785]: 2025-01-30T13:49:53.249994Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:49:53.266470 waagent[1785]: 2025-01-30T13:49:53.250540Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:49:53.266470 waagent[1785]: 2025-01-30T13:49:53.251470Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jan 30 13:49:53.266470 waagent[1785]: 2025-01-30T13:49:53.252238Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:49:53.266470 waagent[1785]: 2025-01-30T13:49:53.253209Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:49:53.266470 waagent[1785]: 2025-01-30T13:49:53.253849Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:49:53.292928 waagent[1785]: 2025-01-30T13:49:53.292852Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:49:53.300207 waagent[1785]: 2025-01-30T13:49:53.293414Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:49:53.300207 waagent[1785]: 2025-01-30T13:49:53.294677Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:49:53.414996 waagent[1785]: 2025-01-30T13:49:53.414828Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:49:53.418826 waagent[1785]: 2025-01-30T13:49:53.418760Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 13:49:53.425445 waagent[1785]: 2025-01-30T13:49:53.425384Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:49:53.441369 waagent[1785]: 2025-01-30T13:49:53.441305Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 13:49:53.455062 waagent[1785]: 2025-01-30T13:49:53.442044Z INFO Daemon Jan 30 13:49:53.455062 waagent[1785]: 2025-01-30T13:49:53.442807Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f84ae0db-fe5d-4b49-906f-23712968ec3a eTag: 7974996036106101131 source: Fabric] Jan 30 13:49:53.455062 waagent[1785]: 2025-01-30T13:49:53.443843Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 13:49:53.455062 waagent[1785]: 2025-01-30T13:49:53.444856Z INFO Daemon Jan 30 13:49:53.455062 waagent[1785]: 2025-01-30T13:49:53.445493Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:49:53.457803 waagent[1785]: 2025-01-30T13:49:53.457750Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:49:53.533674 waagent[1785]: 2025-01-30T13:49:53.533579Z INFO Daemon Downloaded certificate {'thumbprint': '5239694AFF919547798FF6478209D6DE06CDD8B0', 'hasPrivateKey': True} Jan 30 13:49:53.538374 waagent[1785]: 2025-01-30T13:49:53.538313Z INFO Daemon Downloaded certificate {'thumbprint': '09C2254D88A61BEE815FA333582E5D8371E2C5DF', 'hasPrivateKey': False} Jan 30 13:49:53.542803 waagent[1785]: 2025-01-30T13:49:53.542746Z INFO Daemon Fetch goal state completed Jan 30 13:49:53.551873 waagent[1785]: 2025-01-30T13:49:53.551822Z INFO Daemon Daemon Starting provisioning Jan 30 13:49:53.554325 waagent[1785]: 2025-01-30T13:49:53.554213Z INFO Daemon Daemon Handle ovf-env.xml. Jan 30 13:49:53.558541 waagent[1785]: 2025-01-30T13:49:53.554385Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-38674a3e2a] Jan 30 13:49:53.575932 waagent[1785]: 2025-01-30T13:49:53.575832Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-38674a3e2a] Jan 30 13:49:53.583527 waagent[1785]: 2025-01-30T13:49:53.576470Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:49:53.583527 waagent[1785]: 2025-01-30T13:49:53.577737Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:49:53.602349 systemd-networkd[1436]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:49:53.602360 systemd-networkd[1436]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
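Provisioning cycles the interface, which is why networkd re-runs its match logic here, again reports that eth0 matched the catch-all zz-default.network by a "potentially unpredictable interface name", and briefly loses the lease below. A sketch of what such a catch-all unit contains, assuming Flatcar's stock file (paraphrased, not copied from this host):

    # /usr/lib/systemd/network/zz-default.network (paraphrased)
    [Match]
    Name=*

    [Network]
    DHCP=yes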
Jan 30 13:49:53.602416 systemd-networkd[1436]: eth0: DHCP lease lost Jan 30 13:49:53.603734 waagent[1785]: 2025-01-30T13:49:53.603618Z INFO Daemon Daemon Create user account if not exists Jan 30 13:49:53.618202 waagent[1785]: 2025-01-30T13:49:53.604125Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:49:53.618202 waagent[1785]: 2025-01-30T13:49:53.604892Z INFO Daemon Daemon Configure sudoer Jan 30 13:49:53.618202 waagent[1785]: 2025-01-30T13:49:53.605996Z INFO Daemon Daemon Configure sshd Jan 30 13:49:53.618202 waagent[1785]: 2025-01-30T13:49:53.607167Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:49:53.618202 waagent[1785]: 2025-01-30T13:49:53.607738Z INFO Daemon Daemon Deploy ssh public key. Jan 30 13:49:53.621157 systemd-networkd[1436]: eth0: DHCPv6 lease lost Jan 30 13:49:53.674197 systemd-networkd[1436]: eth0: DHCPv4 address 10.200.8.14/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 30 13:49:54.725547 waagent[1785]: 2025-01-30T13:49:54.725447Z INFO Daemon Daemon Provisioning complete Jan 30 13:49:54.738306 waagent[1785]: 2025-01-30T13:49:54.738249Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:49:54.744931 waagent[1785]: 2025-01-30T13:49:54.738572Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Jan 30 13:49:54.744931 waagent[1785]: 2025-01-30T13:49:54.739875Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:49:54.865526 waagent[1864]: 2025-01-30T13:49:54.865419Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:49:54.865985 waagent[1864]: 2025-01-30T13:49:54.865599Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 30 13:49:54.865985 waagent[1864]: 2025-01-30T13:49:54.865681Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 30 13:49:54.921714 waagent[1864]: 2025-01-30T13:49:54.921599Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:49:54.922021 waagent[1864]: 2025-01-30T13:49:54.921941Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:49:54.922145 waagent[1864]: 2025-01-30T13:49:54.922090Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:49:54.930200 waagent[1864]: 2025-01-30T13:49:54.930133Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:49:54.935101 waagent[1864]: 2025-01-30T13:49:54.935046Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 13:49:54.935568 waagent[1864]: 2025-01-30T13:49:54.935516Z INFO ExtHandler Jan 30 13:49:54.935655 waagent[1864]: 2025-01-30T13:49:54.935606Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 413a3351-d306-4d70-86e1-0d36380171a2 eTag: 7974996036106101131 source: Fabric] Jan 30 13:49:54.935964 waagent[1864]: 2025-01-30T13:49:54.935918Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Jan 30 13:49:54.936548 waagent[1864]: 2025-01-30T13:49:54.936493Z INFO ExtHandler Jan 30 13:49:54.936624 waagent[1864]: 2025-01-30T13:49:54.936578Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:49:54.939904 waagent[1864]: 2025-01-30T13:49:54.939867Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:49:55.016633 waagent[1864]: 2025-01-30T13:49:55.016478Z INFO ExtHandler Downloaded certificate {'thumbprint': '5239694AFF919547798FF6478209D6DE06CDD8B0', 'hasPrivateKey': True} Jan 30 13:49:55.017036 waagent[1864]: 2025-01-30T13:49:55.016972Z INFO ExtHandler Downloaded certificate {'thumbprint': '09C2254D88A61BEE815FA333582E5D8371E2C5DF', 'hasPrivateKey': False} Jan 30 13:49:55.017496 waagent[1864]: 2025-01-30T13:49:55.017444Z INFO ExtHandler Fetch goal state completed Jan 30 13:49:55.033039 waagent[1864]: 2025-01-30T13:49:55.032957Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1864 Jan 30 13:49:55.033220 waagent[1864]: 2025-01-30T13:49:55.033165Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:49:55.034798 waagent[1864]: 2025-01-30T13:49:55.034742Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:49:55.035193 waagent[1864]: 2025-01-30T13:49:55.035143Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:49:55.068775 waagent[1864]: 2025-01-30T13:49:55.068721Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 13:49:55.069045 waagent[1864]: 2025-01-30T13:49:55.068975Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:49:55.075766 waagent[1864]: 2025-01-30T13:49:55.075724Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 13:49:55.082857 systemd[1]: Reloading requested from client PID 1879 ('systemctl') (unit waagent.service)... Jan 30 13:49:55.082874 systemd[1]: Reloading... Jan 30 13:49:55.178079 zram_generator::config[1917]: No configuration found. Jan 30 13:49:55.289352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:55.369700 systemd[1]: Reloading finished in 286 ms. Jan 30 13:49:55.404033 waagent[1864]: 2025-01-30T13:49:55.399108Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:49:55.407250 systemd[1]: Reloading requested from client PID 1970 ('systemctl') (unit waagent.service)... Jan 30 13:49:55.407267 systemd[1]: Reloading... Jan 30 13:49:55.501139 zram_generator::config[2010]: No configuration found. Jan 30 13:49:55.614122 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:55.695789 systemd[1]: Reloading finished in 288 ms. 
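The agent behaviour recorded above — auto-update disabled, cgroup monitoring unsupported, persistent firewall rules installed via waagent-network-setup.service — is governed by /etc/waagent.conf. The relevant switches, sketched with the standard WALinuxAgent key names (the values are inferred from the log, not read from the file):

    # /etc/waagent.conf (sketch)
    AutoUpdate.Enabled=n     # log: "AutoUpdate.Enabled is set to False"
    OS.EnableFirewall=y      # drives the persistent firewall rule setup
    Provisioning.Enabled=y
    ResourceDisk.Format=n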
Jan 30 13:49:55.724041 waagent[1864]: 2025-01-30T13:49:55.722380Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:49:55.724041 waagent[1864]: 2025-01-30T13:49:55.722623Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:49:56.768746 waagent[1864]: 2025-01-30T13:49:56.768628Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 13:49:56.769595 waagent[1864]: 2025-01-30T13:49:56.769523Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:49:56.770536 waagent[1864]: 2025-01-30T13:49:56.770468Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 13:49:56.771177 waagent[1864]: 2025-01-30T13:49:56.771116Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:49:56.771253 waagent[1864]: 2025-01-30T13:49:56.771176Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:49:56.771551 waagent[1864]: 2025-01-30T13:49:56.771490Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:49:56.771700 waagent[1864]: 2025-01-30T13:49:56.771635Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 30 13:49:56.772259 waagent[1864]: 2025-01-30T13:49:56.772195Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:49:56.772415 waagent[1864]: 2025-01-30T13:49:56.772354Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:49:56.772554 waagent[1864]: 2025-01-30T13:49:56.772508Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:49:56.772821 waagent[1864]: 2025-01-30T13:49:56.772754Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:49:56.772964 waagent[1864]: 2025-01-30T13:49:56.772908Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:49:56.773111 waagent[1864]: 2025-01-30T13:49:56.772992Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 30 13:49:56.773299 waagent[1864]: 2025-01-30T13:49:56.773242Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:49:56.773630 waagent[1864]: 2025-01-30T13:49:56.773574Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Jan 30 13:49:56.773772 waagent[1864]: 2025-01-30T13:49:56.773708Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:49:56.774451 waagent[1864]: 2025-01-30T13:49:56.774397Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:49:56.776496 waagent[1864]: 2025-01-30T13:49:56.776441Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:49:56.776496 waagent[1864]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:49:56.776496 waagent[1864]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:49:56.776496 waagent[1864]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:49:56.776496 waagent[1864]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:49:56.776496 waagent[1864]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:49:56.776496 waagent[1864]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:49:56.784151 waagent[1864]: 2025-01-30T13:49:56.783963Z INFO ExtHandler ExtHandler Jan 30 13:49:56.785292 waagent[1864]: 2025-01-30T13:49:56.785236Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: d6d3ba8c-31dc-4cd2-890c-2f1e3e231d4c correlation 46e9e835-dd2f-44a5-bc64-7e18f6d0a2ac created: 2025-01-30T13:48:44.602097Z] Jan 30 13:49:56.785786 waagent[1864]: 2025-01-30T13:49:56.785730Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 30 13:49:56.786853 waagent[1864]: 2025-01-30T13:49:56.786538Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 30 13:49:56.826301 waagent[1864]: 2025-01-30T13:49:56.826050Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CCE88F00-C32D-4BA9-A56C-4096384DC5C5;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:49:56.861413 waagent[1864]: 2025-01-30T13:49:56.861314Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:49:56.861413 waagent[1864]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:49:56.861413 waagent[1864]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:49:56.861413 waagent[1864]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:20:da brd ff:ff:ff:ff:ff:ff Jan 30 13:49:56.861413 waagent[1864]: 3: enP15992s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b6:20:da brd ff:ff:ff:ff:ff:ff\ altname enP15992p0s2 Jan 30 13:49:56.861413 waagent[1864]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:49:56.861413 waagent[1864]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:49:56.861413 waagent[1864]: 2: eth0 inet 10.200.8.14/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:49:56.861413 waagent[1864]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:49:56.861413 waagent[1864]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:49:56.861413 waagent[1864]: 2: eth0 inet6 fe80::20d:3aff:feb6:20da/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:49:56.861413 waagent[1864]: 3: enP15992s1 inet6 fe80::20d:3aff:feb6:20da/64 scope link proto kernel_ll \ valid_lft forever preferred_lft 
forever Jan 30 13:49:56.907842 waagent[1864]: 2025-01-30T13:49:56.907766Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jan 30 13:49:56.907842 waagent[1864]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.907842 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.907842 waagent[1864]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.907842 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.907842 waagent[1864]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.907842 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.907842 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:49:56.907842 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:49:56.907842 waagent[1864]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:49:56.911261 waagent[1864]: 2025-01-30T13:49:56.911202Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:49:56.911261 waagent[1864]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.911261 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.911261 waagent[1864]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.911261 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.911261 waagent[1864]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:49:56.911261 waagent[1864]: pkts bytes target prot opt in out source destination Jan 30 13:49:56.911261 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:49:56.911261 waagent[1864]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:49:56.911261 waagent[1864]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:49:56.911652 waagent[1864]: 2025-01-30T13:49:56.911512Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:50:02.626831 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:50:02.634258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:02.732844 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:02.737939 (kubelet)[2100]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:50:03.390797 kubelet[2100]: E0130 13:50:03.390717 2100 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:50:03.394547 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:50:03.394773 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:50:13.645480 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:50:13.658264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:13.707065 chronyd[1648]: Selected source PHC0 Jan 30 13:50:13.751683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
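The routing table above is dumped straight from /proc/net/route, where addresses are little-endian 32-bit hex. Decoded, the entries are exactly the expected Azure plumbing: a default route via 10.200.8.1, the on-link 10.200.8.0/24, and host routes to the WireServer (168.63.129.16) and the link-local IMDS address (169.254.169.254). A minimal decoder over values copied from the dump:

```python
import socket
import struct

def decode(hexfield: str) -> str:
    """Fields in /proc/net/route are little-endian 32-bit hex."""
    return socket.inet_ntoa(struct.pack("<I", int(hexfield, 16)))

# Destination/gateway pairs copied from the routing table logged above.
for dest, gw in [("00000000", "0108C80A"),   # default via 10.200.8.1
                 ("0008C80A", "00000000"),   # 10.200.8.0/24, on-link
                 ("10813FA8", "0108C80A"),   # 168.63.129.16 (WireServer)
                 ("FEA9FEA9", "0108C80A")]:  # 169.254.169.254 (IMDS)
    print(decode(dest), "via", decode(gw))
```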
Jan 30 13:50:13.756714 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:50:14.331191 kubelet[2116]: E0130 13:50:14.331076 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:50:14.333735 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:50:14.333936 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:50:22.166411 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:50:22.167821 systemd[1]: Started sshd@0-10.200.8.14:22-10.200.16.10:50392.service - OpenSSH per-connection server daemon (10.200.16.10:50392). Jan 30 13:50:22.884246 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 50392 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:22.886095 sshd[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:22.891680 systemd-logind[1646]: New session 3 of user core. Jan 30 13:50:22.901171 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:50:23.485327 systemd[1]: Started sshd@1-10.200.8.14:22-10.200.16.10:50404.service - OpenSSH per-connection server daemon (10.200.16.10:50404). Jan 30 13:50:24.165924 sshd[2128]: Accepted publickey for core from 10.200.16.10 port 50404 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:24.167998 sshd[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:24.172169 systemd-logind[1646]: New session 4 of user core. Jan 30 13:50:24.182183 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:50:24.542585 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:50:24.549258 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:24.644751 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:24.649607 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:50:24.656272 sshd[2128]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:24.660300 systemd-logind[1646]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:50:24.661144 systemd[1]: sshd@1-10.200.8.14:22-10.200.16.10:50404.service: Deactivated successfully. Jan 30 13:50:24.663835 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:50:24.666704 systemd-logind[1646]: Removed session 4. Jan 30 13:50:24.776127 systemd[1]: Started sshd@2-10.200.8.14:22-10.200.16.10:50414.service - OpenSSH per-connection server daemon (10.200.16.10:50414). 
Jan 30 13:50:25.175694 kubelet[2140]: E0130 13:50:25.175631 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:50:25.178057 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:50:25.178278 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:50:25.449937 sshd[2149]: Accepted publickey for core from 10.200.16.10 port 50414 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:25.451791 sshd[2149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:25.457651 systemd-logind[1646]: New session 5 of user core. Jan 30 13:50:25.463158 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:50:25.928956 sshd[2149]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:25.933239 systemd[1]: sshd@2-10.200.8.14:22-10.200.16.10:50414.service: Deactivated successfully. Jan 30 13:50:25.935103 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:50:25.935796 systemd-logind[1646]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:50:25.936682 systemd-logind[1646]: Removed session 5. Jan 30 13:50:26.047139 systemd[1]: Started sshd@3-10.200.8.14:22-10.200.16.10:51050.service - OpenSSH per-connection server daemon (10.200.16.10:51050). Jan 30 13:50:26.718675 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 51050 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:26.720379 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:26.724620 systemd-logind[1646]: New session 6 of user core. Jan 30 13:50:26.732157 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:50:27.197989 sshd[2157]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:27.201222 systemd[1]: sshd@3-10.200.8.14:22-10.200.16.10:51050.service: Deactivated successfully. Jan 30 13:50:27.203405 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:50:27.204864 systemd-logind[1646]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:50:27.205910 systemd-logind[1646]: Removed session 6. Jan 30 13:50:27.316087 systemd[1]: Started sshd@4-10.200.8.14:22-10.200.16.10:51054.service - OpenSSH per-connection server daemon (10.200.16.10:51054). Jan 30 13:50:27.989811 sshd[2164]: Accepted publickey for core from 10.200.16.10 port 51054 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:27.991644 sshd[2164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:27.997261 systemd-logind[1646]: New session 7 of user core. Jan 30 13:50:28.008166 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:50:28.490495 sudo[2167]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:50:28.490862 sudo[2167]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:50:28.518855 sudo[2167]: pam_unix(sudo:session): session closed for user root Jan 30 13:50:28.631998 sshd[2164]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:28.637163 systemd[1]: sshd@4-10.200.8.14:22-10.200.16.10:51054.service: Deactivated successfully. 
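kubelet.service is crash-looping because /var/lib/kubelet/config.yaml has not been written yet; nothing here is broken, the machine simply has not been configured as a Kubernetes node at this point. The restart cadence visible in the "Scheduled restart job" entries is about eleven seconds, consistent with a RestartSec=10-style unit setting plus process startup time (an inference from the timestamps, not from the unit file itself):

```python
from datetime import datetime

# "Scheduled restart job" timestamps for kubelet.service, taken from the log.
starts = ["13:50:02.626831", "13:50:13.645480", "13:50:24.542585"]
ts = [datetime.strptime(s, "%H:%M:%S.%f") for s in starts]
for a, b in zip(ts, ts[1:]):
    print(f"{(b - a).total_seconds():.1f}s between restarts")  # ~11.0s, ~10.9s
```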
Jan 30 13:50:28.639394 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:50:28.640370 systemd-logind[1646]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:50:28.641513 systemd-logind[1646]: Removed session 7. Jan 30 13:50:28.754430 systemd[1]: Started sshd@5-10.200.8.14:22-10.200.16.10:51056.service - OpenSSH per-connection server daemon (10.200.16.10:51056). Jan 30 13:50:29.432783 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 51056 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:29.435619 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:29.441012 systemd-logind[1646]: New session 8 of user core. Jan 30 13:50:29.450185 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:50:29.806819 sudo[2176]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:50:29.807634 sudo[2176]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:50:29.811071 sudo[2176]: pam_unix(sudo:session): session closed for user root Jan 30 13:50:29.816197 sudo[2175]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:50:29.816542 sudo[2175]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:50:29.830362 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:50:29.832236 auditctl[2179]: No rules Jan 30 13:50:29.832605 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:50:29.832809 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:50:29.835572 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:50:29.872836 augenrules[2197]: No rules Jan 30 13:50:29.874339 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:50:29.876167 sudo[2175]: pam_unix(sudo:session): session closed for user root Jan 30 13:50:29.986250 sshd[2172]: pam_unix(sshd:session): session closed for user core Jan 30 13:50:29.989793 systemd[1]: sshd@5-10.200.8.14:22-10.200.16.10:51056.service: Deactivated successfully. Jan 30 13:50:29.992150 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:50:29.993739 systemd-logind[1646]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:50:29.994693 systemd-logind[1646]: Removed session 8. Jan 30 13:50:30.109053 systemd[1]: Started sshd@6-10.200.8.14:22-10.200.16.10:51066.service - OpenSSH per-connection server daemon (10.200.16.10:51066). Jan 30 13:50:30.806276 sshd[2205]: Accepted publickey for core from 10.200.16.10 port 51066 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:50:30.808123 sshd[2205]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:50:30.813174 systemd-logind[1646]: New session 9 of user core. Jan 30 13:50:30.819167 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:50:31.181807 sudo[2208]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:50:31.182204 sudo[2208]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:50:32.449354 systemd[1]: Starting docker.service - Docker Application Container Engine... 
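Sessions 3 through 9 all follow the same pam_unix open/close pattern, so they can be paired up mechanically when auditing a stretch of log like this one. A small parsing sketch over lines in the format shown above (sample lines abridged from the log):

```python
import re

# Abridged pam_unix lines in the same format as the sshd entries above.
lines = [
    "sshd[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    "sshd[2128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)",
    "sshd[2128]: pam_unix(sshd:session): session closed for user core",
]
open_sessions = set()
for line in lines:
    m = re.search(r"sshd\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed)", line)
    if m:
        pid, action = m.groups()
        (open_sessions.add if action == "opened" else open_sessions.discard)(pid)
print("sessions still open:", open_sessions)  # {'2123'}
```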
Jan 30 13:50:32.449427 (dockerd)[2224]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:50:33.248499 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 30 13:50:33.933942 dockerd[2224]: time="2025-01-30T13:50:33.933871284Z" level=info msg="Starting up" Jan 30 13:50:34.314057 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2428829545-merged.mount: Deactivated successfully. Jan 30 13:50:34.381214 systemd[1]: var-lib-docker-metacopy\x2dcheck237401574-merged.mount: Deactivated successfully. Jan 30 13:50:34.409472 dockerd[2224]: time="2025-01-30T13:50:34.409415921Z" level=info msg="Loading containers: start." Jan 30 13:50:34.607295 kernel: Initializing XFRM netlink socket Jan 30 13:50:34.752054 systemd-networkd[1436]: docker0: Link UP Jan 30 13:50:34.779299 dockerd[2224]: time="2025-01-30T13:50:34.779250372Z" level=info msg="Loading containers: done." Jan 30 13:50:34.840363 dockerd[2224]: time="2025-01-30T13:50:34.840282788Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:50:34.840550 dockerd[2224]: time="2025-01-30T13:50:34.840441590Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:50:34.840594 dockerd[2224]: time="2025-01-30T13:50:34.840579492Z" level=info msg="Daemon has completed initialization" Jan 30 13:50:34.894280 dockerd[2224]: time="2025-01-30T13:50:34.893730990Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:50:34.894149 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:50:35.333535 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:50:35.342252 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:35.345687 update_engine[1649]: I20250130 13:50:35.345048 1649 update_attempter.cc:509] Updating boot flags... Jan 30 13:50:35.434033 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2373) Jan 30 13:50:35.536611 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:35.541205 (kubelet)[2393]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:50:35.577879 kubelet[2393]: E0130 13:50:35.577817 2393 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:50:35.580402 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:50:35.580607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:50:36.088916 containerd[1673]: time="2025-01-30T13:50:36.088558392Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\"" Jan 30 13:50:36.711067 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3617575979.mount: Deactivated successfully. 
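Once dockerd reports "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket; GET /version is a standard endpoint and returns the same version string (26.1.0) and storage driver seen in the daemon's startup messages. A stdlib-only sketch, assuming the default socket path from the log and a caller with permission on the socket:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over the Docker Engine Unix socket."""
    def __init__(self, socket_path: str):
        super().__init__("localhost")  # Host header only; not used for routing
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/version")
print(conn.getresponse().read().decode())  # JSON including "Version": "26.1.0"
```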
Jan 30 13:50:38.331498 containerd[1673]: time="2025-01-30T13:50:38.331429331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.334088 containerd[1673]: time="2025-01-30T13:50:38.334025696Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.5: active requests=0, bytes read=27976729" Jan 30 13:50:38.338933 containerd[1673]: time="2025-01-30T13:50:38.338873818Z" level=info msg="ImageCreate event name:\"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.343578 containerd[1673]: time="2025-01-30T13:50:38.343543035Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:38.344854 containerd[1673]: time="2025-01-30T13:50:38.344619862Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.5\" with image id \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:fc4b366c0036b90d147f3b58244cf7d5f1f42b0db539f0fe83a8fc6e25a434ab\", size \"27973521\" in 2.256009069s" Jan 30 13:50:38.344854 containerd[1673]: time="2025-01-30T13:50:38.344669264Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.5\" returns image reference \"sha256:2212e74642e45d72a36f297bea139f607ce4ccc4792966a8e9c4d30e04a4a6fb\"" Jan 30 13:50:38.346480 containerd[1673]: time="2025-01-30T13:50:38.346449408Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\"" Jan 30 13:50:40.043468 containerd[1673]: time="2025-01-30T13:50:40.043408598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:40.046179 containerd[1673]: time="2025-01-30T13:50:40.046126066Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.5: active requests=0, bytes read=24701151" Jan 30 13:50:40.050441 containerd[1673]: time="2025-01-30T13:50:40.050404973Z" level=info msg="ImageCreate event name:\"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:40.056203 containerd[1673]: time="2025-01-30T13:50:40.056146217Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:40.057173 containerd[1673]: time="2025-01-30T13:50:40.057140142Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.5\" with image id \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:848cf42bf6c3c5ccac232b76c901c309edb3ebeac4d856885af0fc718798207e\", size \"26147725\" in 1.710650133s" Jan 30 13:50:40.057393 containerd[1673]: time="2025-01-30T13:50:40.057284646Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.5\" returns image reference \"sha256:d7fccb640e0edce9c47bd71f2b2ce328b824bea199bfe5838dda3fe2af6372f2\"" Jan 30 13:50:40.058120 
containerd[1673]: time="2025-01-30T13:50:40.057892161Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\"" Jan 30 13:50:41.635412 containerd[1673]: time="2025-01-30T13:50:41.635348751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.637235 containerd[1673]: time="2025-01-30T13:50:41.637177497Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.5: active requests=0, bytes read=18652061" Jan 30 13:50:41.642272 containerd[1673]: time="2025-01-30T13:50:41.642215824Z" level=info msg="ImageCreate event name:\"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.648045 containerd[1673]: time="2025-01-30T13:50:41.647973368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:41.649077 containerd[1673]: time="2025-01-30T13:50:41.649042095Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.5\" with image id \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:0e01fd956ba32a7fa08f6b6da24e8c49015905c8e2cf752978d495e44cd4a8a9\", size \"20098653\" in 1.591112233s" Jan 30 13:50:41.649306 containerd[1673]: time="2025-01-30T13:50:41.649185098Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.5\" returns image reference \"sha256:4b2fb209f5d1efc0fc980c5acda28886e4eb6ab4820173976bdd441cbd2ee09a\"" Jan 30 13:50:41.650084 containerd[1673]: time="2025-01-30T13:50:41.650058320Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 30 13:50:42.829452 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1475226325.mount: Deactivated successfully. 
Jan 30 13:50:43.326270 containerd[1673]: time="2025-01-30T13:50:43.326205187Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:43.328958 containerd[1673]: time="2025-01-30T13:50:43.328894655Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231136" Jan 30 13:50:43.331607 containerd[1673]: time="2025-01-30T13:50:43.331546921Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:43.335510 containerd[1673]: time="2025-01-30T13:50:43.335448119Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:43.336206 containerd[1673]: time="2025-01-30T13:50:43.336031534Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.685922613s" Jan 30 13:50:43.336206 containerd[1673]: time="2025-01-30T13:50:43.336076435Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 30 13:50:43.336801 containerd[1673]: time="2025-01-30T13:50:43.336747452Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:50:43.900489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1604200142.mount: Deactivated successfully. 
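Each "Pulled image" line carries the image size and the wall time containerd spent on the pull, so effective throughput falls out directly; for the four pulls above it lands in the 12-18 MB/s range:

```python
# (size_bytes, seconds) pairs taken from the "Pulled image" lines above.
pulls = {
    "kube-apiserver:v1.31.5":          (27973521, 2.256009069),
    "kube-controller-manager:v1.31.5": (26147725, 1.710650133),
    "kube-scheduler:v1.31.5":          (20098653, 1.591112233),
    "kube-proxy:v1.31.5":              (30230147, 1.685922613),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")  # 12.4, 15.3, 12.6, 17.9
```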
Jan 30 13:50:45.155394 containerd[1673]: time="2025-01-30T13:50:45.155328994Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:45.158163 containerd[1673]: time="2025-01-30T13:50:45.158105363Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 13:50:45.161055 containerd[1673]: time="2025-01-30T13:50:45.160984936Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:45.165506 containerd[1673]: time="2025-01-30T13:50:45.165450348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:45.166638 containerd[1673]: time="2025-01-30T13:50:45.166479173Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.82969192s" Jan 30 13:50:45.166638 containerd[1673]: time="2025-01-30T13:50:45.166524875Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:50:45.167469 containerd[1673]: time="2025-01-30T13:50:45.167423797Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 30 13:50:45.583392 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:50:45.590286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:45.687544 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:45.698376 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:50:46.292920 kubelet[2534]: E0130 13:50:46.292880 2534 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:50:46.295444 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:50:46.295712 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:50:46.318841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751436738.mount: Deactivated successfully. 
Jan 30 13:50:46.340587 containerd[1673]: time="2025-01-30T13:50:46.340534384Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:46.342411 containerd[1673]: time="2025-01-30T13:50:46.342350528Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" Jan 30 13:50:46.345635 containerd[1673]: time="2025-01-30T13:50:46.345584606Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:46.349684 containerd[1673]: time="2025-01-30T13:50:46.349626904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:46.350529 containerd[1673]: time="2025-01-30T13:50:46.350387422Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.182887423s" Jan 30 13:50:46.350529 containerd[1673]: time="2025-01-30T13:50:46.350425423Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 30 13:50:46.351390 containerd[1673]: time="2025-01-30T13:50:46.351312344Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jan 30 13:50:47.577510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1165118923.mount: Deactivated successfully. Jan 30 13:50:49.828617 containerd[1673]: time="2025-01-30T13:50:49.828551018Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:49.831500 containerd[1673]: time="2025-01-30T13:50:49.831435888Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56779981" Jan 30 13:50:49.848529 containerd[1673]: time="2025-01-30T13:50:49.848447398Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:49.854823 containerd[1673]: time="2025-01-30T13:50:49.854756351Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:49.860776 containerd[1673]: time="2025-01-30T13:50:49.858620844Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.507263899s" Jan 30 13:50:49.860776 containerd[1673]: time="2025-01-30T13:50:49.858672245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jan 30 13:50:53.600178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:50:53.607291 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:53.633706 systemd[1]: Reloading requested from client PID 2626 ('systemctl') (unit session-9.scope)... Jan 30 13:50:53.633721 systemd[1]: Reloading... Jan 30 13:50:53.740064 zram_generator::config[2666]: No configuration found. Jan 30 13:50:53.859703 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:50:53.940722 systemd[1]: Reloading finished in 306 ms. Jan 30 13:50:53.988476 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:50:53.988582 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:50:53.988862 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:53.995367 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:50:55.699759 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:50:55.706898 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:50:55.744990 kubelet[2733]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:50:55.745431 kubelet[2733]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:50:55.745431 kubelet[2733]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 30 13:50:55.745431 kubelet[2733]: I0130 13:50:55.745281 2733 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:50:56.218828 kubelet[2733]: I0130 13:50:56.218771 2733 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:50:56.218828 kubelet[2733]: I0130 13:50:56.218814 2733 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:50:56.219313 kubelet[2733]: I0130 13:50:56.219283 2733 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:50:56.243530 kubelet[2733]: I0130 13:50:56.243333 2733 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:50:56.244800 kubelet[2733]: E0130 13:50:56.244490 2733 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:56.253627 kubelet[2733]: E0130 13:50:56.253588 2733 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:50:56.253627 kubelet[2733]: I0130 13:50:56.253619 2733 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:50:56.259493 kubelet[2733]: I0130 13:50:56.259466 2733 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:50:56.259726 kubelet[2733]: I0130 13:50:56.259599 2733 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:50:56.259788 kubelet[2733]: I0130 13:50:56.259754 2733 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:50:56.260024 kubelet[2733]: I0130 13:50:56.259792 2733 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-38674a3e2a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:50:56.260024 kubelet[2733]: I0130 13:50:56.260024 2733 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:50:56.260245 kubelet[2733]: I0130 13:50:56.260039 2733 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:50:56.260245 kubelet[2733]: I0130 13:50:56.260178 2733 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:50:56.262579 kubelet[2733]: I0130 13:50:56.262554 2733 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:50:56.262679 kubelet[2733]: I0130 13:50:56.262585 2733 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:50:56.262679 kubelet[2733]: I0130 13:50:56.262631 2733 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:50:56.262679 kubelet[2733]: I0130 13:50:56.262650 2733 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:50:56.267967 kubelet[2733]: I0130 13:50:56.267847 2733 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:50:56.270266 kubelet[2733]: I0130 13:50:56.270233 2733 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:50:56.271450 kubelet[2733]: W0130 13:50:56.271409 2733 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 30 13:50:56.272746 kubelet[2733]: I0130 13:50:56.272054 2733 server.go:1269] "Started kubelet" Jan 30 13:50:56.272746 kubelet[2733]: W0130 13:50:56.272250 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-38674a3e2a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:56.272746 kubelet[2733]: E0130 13:50:56.272320 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-38674a3e2a&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:56.276331 kubelet[2733]: W0130 13:50:56.276112 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:56.276331 kubelet[2733]: E0130 13:50:56.276181 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:56.276331 kubelet[2733]: I0130 13:50:56.276277 2733 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:50:56.279065 kubelet[2733]: I0130 13:50:56.278833 2733 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:50:56.279313 kubelet[2733]: I0130 13:50:56.279262 2733 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:50:56.279705 kubelet[2733]: I0130 13:50:56.279686 2733 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:50:56.282824 kubelet[2733]: E0130 13:50:56.280134 2733 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.14:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.14:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-38674a3e2a.181f7cab47abb2fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-38674a3e2a,UID:ci-4081.3.0-a-38674a3e2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-38674a3e2a,},FirstTimestamp:2025-01-30 13:50:56.271987454 +0000 UTC m=+0.561574999,LastTimestamp:2025-01-30 13:50:56.271987454 +0000 UTC m=+0.561574999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-38674a3e2a,}" Jan 30 13:50:56.285278 kubelet[2733]: I0130 13:50:56.284407 2733 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:50:56.286044 kubelet[2733]: I0130 13:50:56.285988 2733 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:50:56.286882 kubelet[2733]: I0130 13:50:56.286862 2733 
volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:50:56.287169 kubelet[2733]: E0130 13:50:56.287145 2733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-38674a3e2a\" not found" Jan 30 13:50:56.289489 kubelet[2733]: E0130 13:50:56.289443 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-38674a3e2a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="200ms" Jan 30 13:50:56.290075 kubelet[2733]: I0130 13:50:56.290054 2733 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:50:56.290256 kubelet[2733]: I0130 13:50:56.290235 2733 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:50:56.290944 kubelet[2733]: I0130 13:50:56.290922 2733 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:50:56.291021 kubelet[2733]: I0130 13:50:56.290984 2733 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:50:56.292540 kubelet[2733]: E0130 13:50:56.292524 2733 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:50:56.293326 kubelet[2733]: I0130 13:50:56.293307 2733 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:50:56.298672 kubelet[2733]: W0130 13:50:56.298617 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:56.298865 kubelet[2733]: E0130 13:50:56.298796 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:56.322143 kubelet[2733]: I0130 13:50:56.321986 2733 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:50:56.323714 kubelet[2733]: I0130 13:50:56.323371 2733 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:50:56.323714 kubelet[2733]: I0130 13:50:56.323405 2733 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:50:56.323714 kubelet[2733]: I0130 13:50:56.323424 2733 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:50:56.323714 kubelet[2733]: E0130 13:50:56.323461 2733 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:50:56.328715 kubelet[2733]: W0130 13:50:56.328650 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:56.329338 kubelet[2733]: E0130 13:50:56.329310 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:56.386622 kubelet[2733]: I0130 13:50:56.386584 2733 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:50:56.386622 kubelet[2733]: I0130 13:50:56.386617 2733 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:50:56.386845 kubelet[2733]: I0130 13:50:56.386642 2733 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:50:56.388158 kubelet[2733]: E0130 13:50:56.388128 2733 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-38674a3e2a\" not found" Jan 30 13:50:56.392196 kubelet[2733]: I0130 13:50:56.392173 2733 policy_none.go:49] "None policy: Start" Jan 30 13:50:56.392867 kubelet[2733]: I0130 13:50:56.392835 2733 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:50:56.392867 kubelet[2733]: I0130 13:50:56.392861 2733 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:50:56.401530 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:50:56.411054 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 30 13:50:56.414633 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:50:56.422436 kubelet[2733]: I0130 13:50:56.421736 2733 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:50:56.422436 kubelet[2733]: I0130 13:50:56.421971 2733 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:50:56.422436 kubelet[2733]: I0130 13:50:56.421989 2733 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:50:56.422436 kubelet[2733]: I0130 13:50:56.422266 2733 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:50:56.424567 kubelet[2733]: E0130 13:50:56.424544 2733 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-38674a3e2a\" not found" Jan 30 13:50:56.435103 systemd[1]: Created slice kubepods-burstable-podc44bf465ec86aaefd6c1facc996f7bc2.slice - libcontainer container kubepods-burstable-podc44bf465ec86aaefd6c1facc996f7bc2.slice. 
Jan 30 13:50:56.446682 systemd[1]: Created slice kubepods-burstable-pod2465438ae04a07508ff23ceeaa4cf770.slice - libcontainer container kubepods-burstable-pod2465438ae04a07508ff23ceeaa4cf770.slice. Jan 30 13:50:56.462046 systemd[1]: Created slice kubepods-burstable-pod5bf314aa7003aff32e3b3b6d29af0d9c.slice - libcontainer container kubepods-burstable-pod5bf314aa7003aff32e3b3b6d29af0d9c.slice. Jan 30 13:50:56.490712 kubelet[2733]: E0130 13:50:56.490569 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-38674a3e2a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="400ms" Jan 30 13:50:56.524049 kubelet[2733]: I0130 13:50:56.524014 2733 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.524460 kubelet[2733]: E0130 13:50:56.524429 2733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.591905 kubelet[2733]: I0130 13:50:56.591855 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592048 kubelet[2733]: I0130 13:50:56.591912 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592048 kubelet[2733]: I0130 13:50:56.591967 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bf314aa7003aff32e3b3b6d29af0d9c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-38674a3e2a\" (UID: \"5bf314aa7003aff32e3b3b6d29af0d9c\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592048 kubelet[2733]: I0130 13:50:56.592014 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592048 kubelet[2733]: I0130 13:50:56.592044 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592287 kubelet[2733]: I0130 13:50:56.592070 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592287 kubelet[2733]: I0130 13:50:56.592097 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592287 kubelet[2733]: I0130 13:50:56.592123 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.592287 kubelet[2733]: I0130 13:50:56.592152 2733 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.727589 kubelet[2733]: I0130 13:50:56.727553 2733 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.728048 kubelet[2733]: E0130 13:50:56.727992 2733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:56.746297 containerd[1673]: time="2025-01-30T13:50:56.746162811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-38674a3e2a,Uid:c44bf465ec86aaefd6c1facc996f7bc2,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:56.760895 containerd[1673]: time="2025-01-30T13:50:56.760856388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-38674a3e2a,Uid:2465438ae04a07508ff23ceeaa4cf770,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:56.765594 containerd[1673]: time="2025-01-30T13:50:56.765554708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-38674a3e2a,Uid:5bf314aa7003aff32e3b3b6d29af0d9c,Namespace:kube-system,Attempt:0,}" Jan 30 13:50:56.891871 kubelet[2733]: E0130 13:50:56.891799 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-38674a3e2a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="800ms" Jan 30 13:50:57.130250 kubelet[2733]: I0130 13:50:57.130191 2733 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:57.135346 kubelet[2733]: E0130 13:50:57.130583 2733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:57.336457 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3056036818.mount: Deactivated successfully. Jan 30 13:50:57.377189 containerd[1673]: time="2025-01-30T13:50:57.377129488Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:50:57.379639 containerd[1673]: time="2025-01-30T13:50:57.379582051Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:50:57.382585 containerd[1673]: time="2025-01-30T13:50:57.382162217Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:50:57.384844 containerd[1673]: time="2025-01-30T13:50:57.384778884Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:50:57.387597 containerd[1673]: time="2025-01-30T13:50:57.387555755Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:50:57.389767 containerd[1673]: time="2025-01-30T13:50:57.389722211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:50:57.391883 containerd[1673]: time="2025-01-30T13:50:57.391803964Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:50:57.396088 containerd[1673]: time="2025-01-30T13:50:57.396055273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:50:57.396839 containerd[1673]: time="2025-01-30T13:50:57.396803192Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 631.177282ms" Jan 30 13:50:57.398960 containerd[1673]: time="2025-01-30T13:50:57.398924547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 637.986456ms" Jan 30 13:50:57.399629 containerd[1673]: time="2025-01-30T13:50:57.399595664Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 653.338951ms" Jan 30 13:50:57.522591 kubelet[2733]: W0130 13:50:57.522540 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get 
"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:57.522745 kubelet[2733]: E0130 13:50:57.522609 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.8.14:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:57.575596 kubelet[2733]: W0130 13:50:57.575517 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-38674a3e2a&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:57.575596 kubelet[2733]: E0130 13:50:57.575609 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.8.14:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-38674a3e2a&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:57.618919 kubelet[2733]: W0130 13:50:57.618865 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:57.618919 kubelet[2733]: E0130 13:50:57.618926 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.8.14:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:57.681971 kubelet[2733]: W0130 13:50:57.681799 2733 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.8.14:6443: connect: connection refused Jan 30 13:50:57.681971 kubelet[2733]: E0130 13:50:57.681879 2733 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.8.14:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:57.692762 kubelet[2733]: E0130 13:50:57.692693 2733 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-38674a3e2a?timeout=10s\": dial tcp 10.200.8.14:6443: connect: connection refused" interval="1.6s" Jan 30 13:50:57.934620 kubelet[2733]: I0130 13:50:57.933987 2733 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:57.934620 kubelet[2733]: E0130 13:50:57.934484 2733 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.8.14:6443/api/v1/nodes\": dial tcp 10.200.8.14:6443: connect: connection refused" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:50:58.105045 containerd[1673]: time="2025-01-30T13:50:58.103370207Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:58.105045 containerd[1673]: time="2025-01-30T13:50:58.103430309Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:58.105045 containerd[1673]: time="2025-01-30T13:50:58.103444909Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.105045 containerd[1673]: time="2025-01-30T13:50:58.103568612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.107930 containerd[1673]: time="2025-01-30T13:50:58.107606416Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:58.107930 containerd[1673]: time="2025-01-30T13:50:58.107666618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:58.107930 containerd[1673]: time="2025-01-30T13:50:58.107695418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.107930 containerd[1673]: time="2025-01-30T13:50:58.107804721Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.116769 containerd[1673]: time="2025-01-30T13:50:58.116694649Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:50:58.117426 containerd[1673]: time="2025-01-30T13:50:58.117093059Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:50:58.118139 containerd[1673]: time="2025-01-30T13:50:58.118093285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.118438 containerd[1673]: time="2025-01-30T13:50:58.118402993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:50:58.139601 systemd[1]: Started cri-containerd-c67aa1f7fea525297c80ff59b79de854ca0a13413e16d4c6bebc0dfa507b68c6.scope - libcontainer container c67aa1f7fea525297c80ff59b79de854ca0a13413e16d4c6bebc0dfa507b68c6. Jan 30 13:50:58.152464 systemd[1]: Started cri-containerd-3fea974735e8c81d4f3841474a758c05c7ed30614e17ab2e3c14dacd108af24b.scope - libcontainer container 3fea974735e8c81d4f3841474a758c05c7ed30614e17ab2e3c14dacd108af24b. Jan 30 13:50:58.175338 systemd[1]: Started cri-containerd-ddd5345331936189227751dd17364504ef703a1ec6a2e3105a92cc7966dee6ba.scope - libcontainer container ddd5345331936189227751dd17364504ef703a1ec6a2e3105a92cc7966dee6ba. 
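
The three `Started cri-containerd-….scope` entries above are the systemd scopes backing one CRI `RunPodSandbox` call per control-plane static pod; the entries that follow show each call returning its sandbox id. A minimal Go sketch of the same CRI call, assuming containerd's default socket path and reusing the kube-scheduler metadata printed in the log (the gRPC wiring and timeout are illustrative, not the kubelet's actual code path):

```go
// Minimal sketch: issue a CRI RunPodSandbox call directly, mirroring the
// "RunPodSandbox for &PodSandboxMetadata{...}" entries above.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint; path assumed from containerd defaults.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Metadata copied from the kube-scheduler sandbox entry in the log.
	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-scheduler-ci-4081.3.0-a-38674a3e2a",
				Uid:       "5bf314aa7003aff32e3b3b6d29af0d9c",
				Namespace: "kube-system",
				Attempt:   0,
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. the ddd5345… id above
}
```
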
Jan 30 13:50:58.238183 containerd[1673]: time="2025-01-30T13:50:58.237421144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-38674a3e2a,Uid:2465438ae04a07508ff23ceeaa4cf770,Namespace:kube-system,Attempt:0,} returns sandbox id \"c67aa1f7fea525297c80ff59b79de854ca0a13413e16d4c6bebc0dfa507b68c6\"" Jan 30 13:50:58.247791 containerd[1673]: time="2025-01-30T13:50:58.247607105Z" level=info msg="CreateContainer within sandbox \"c67aa1f7fea525297c80ff59b79de854ca0a13413e16d4c6bebc0dfa507b68c6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:50:58.256965 containerd[1673]: time="2025-01-30T13:50:58.256856342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-38674a3e2a,Uid:5bf314aa7003aff32e3b3b6d29af0d9c,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddd5345331936189227751dd17364504ef703a1ec6a2e3105a92cc7966dee6ba\"" Jan 30 13:50:58.262123 containerd[1673]: time="2025-01-30T13:50:58.261979574Z" level=info msg="CreateContainer within sandbox \"ddd5345331936189227751dd17364504ef703a1ec6a2e3105a92cc7966dee6ba\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:50:58.266595 containerd[1673]: time="2025-01-30T13:50:58.266561391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-38674a3e2a,Uid:c44bf465ec86aaefd6c1facc996f7bc2,Namespace:kube-system,Attempt:0,} returns sandbox id \"3fea974735e8c81d4f3841474a758c05c7ed30614e17ab2e3c14dacd108af24b\"" Jan 30 13:50:58.268952 containerd[1673]: time="2025-01-30T13:50:58.268918052Z" level=info msg="CreateContainer within sandbox \"3fea974735e8c81d4f3841474a758c05c7ed30614e17ab2e3c14dacd108af24b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:50:58.305491 containerd[1673]: time="2025-01-30T13:50:58.305443788Z" level=info msg="CreateContainer within sandbox \"c67aa1f7fea525297c80ff59b79de854ca0a13413e16d4c6bebc0dfa507b68c6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"08a83196b0b183b1b2fd36911b522f8b9e9213c83f2ac2a23a262cd7a673072f\"" Jan 30 13:50:58.306450 containerd[1673]: time="2025-01-30T13:50:58.306418113Z" level=info msg="StartContainer for \"08a83196b0b183b1b2fd36911b522f8b9e9213c83f2ac2a23a262cd7a673072f\"" Jan 30 13:50:58.331027 containerd[1673]: time="2025-01-30T13:50:58.328169471Z" level=info msg="CreateContainer within sandbox \"ddd5345331936189227751dd17364504ef703a1ec6a2e3105a92cc7966dee6ba\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43d492e57bba205cde2f30829d0501f340afd398321c66433855d57f1d42f914\"" Jan 30 13:50:58.331349 containerd[1673]: time="2025-01-30T13:50:58.331303651Z" level=info msg="StartContainer for \"43d492e57bba205cde2f30829d0501f340afd398321c66433855d57f1d42f914\"" Jan 30 13:50:58.337723 kubelet[2733]: E0130 13:50:58.337693 2733 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.8.14:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.8.14:6443: connect: connection refused" logger="UnhandledError" Jan 30 13:50:58.349567 containerd[1673]: time="2025-01-30T13:50:58.349527518Z" level=info msg="CreateContainer within sandbox \"3fea974735e8c81d4f3841474a758c05c7ed30614e17ab2e3c14dacd108af24b\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"065b0285666f9248e8c997c4a3e036cd72df3e2e1aac915054a2cc476f365f68\"" Jan 30 13:50:58.351269 containerd[1673]: time="2025-01-30T13:50:58.351232262Z" level=info msg="StartContainer for \"065b0285666f9248e8c997c4a3e036cd72df3e2e1aac915054a2cc476f365f68\"" Jan 30 13:50:58.352397 systemd[1]: Started cri-containerd-08a83196b0b183b1b2fd36911b522f8b9e9213c83f2ac2a23a262cd7a673072f.scope - libcontainer container 08a83196b0b183b1b2fd36911b522f8b9e9213c83f2ac2a23a262cd7a673072f. Jan 30 13:50:58.406177 systemd[1]: Started cri-containerd-43d492e57bba205cde2f30829d0501f340afd398321c66433855d57f1d42f914.scope - libcontainer container 43d492e57bba205cde2f30829d0501f340afd398321c66433855d57f1d42f914. Jan 30 13:50:58.414434 systemd[1]: Started cri-containerd-065b0285666f9248e8c997c4a3e036cd72df3e2e1aac915054a2cc476f365f68.scope - libcontainer container 065b0285666f9248e8c997c4a3e036cd72df3e2e1aac915054a2cc476f365f68. Jan 30 13:50:58.457783 containerd[1673]: time="2025-01-30T13:50:58.457733393Z" level=info msg="StartContainer for \"08a83196b0b183b1b2fd36911b522f8b9e9213c83f2ac2a23a262cd7a673072f\" returns successfully" Jan 30 13:50:58.512819 containerd[1673]: time="2025-01-30T13:50:58.512455496Z" level=info msg="StartContainer for \"43d492e57bba205cde2f30829d0501f340afd398321c66433855d57f1d42f914\" returns successfully" Jan 30 13:50:58.590036 containerd[1673]: time="2025-01-30T13:50:58.589886481Z" level=info msg="StartContainer for \"065b0285666f9248e8c997c4a3e036cd72df3e2e1aac915054a2cc476f365f68\" returns successfully" Jan 30 13:50:59.537651 kubelet[2733]: I0130 13:50:59.537611 2733 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:00.661102 kubelet[2733]: E0130 13:51:00.661035 2733 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-38674a3e2a\" not found" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:00.726211 kubelet[2733]: E0130 13:51:00.726098 2733 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-38674a3e2a.181f7cab47abb2fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-38674a3e2a,UID:ci-4081.3.0-a-38674a3e2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-38674a3e2a,},FirstTimestamp:2025-01-30 13:50:56.271987454 +0000 UTC m=+0.561574999,LastTimestamp:2025-01-30 13:50:56.271987454 +0000 UTC m=+0.561574999,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-38674a3e2a,}" Jan 30 13:51:00.802402 kubelet[2733]: I0130 13:51:00.802049 2733 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:00.802673 kubelet[2733]: E0130 13:51:00.802563 2733 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-38674a3e2a.181f7cab48e4dc3c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-38674a3e2a,UID:ci-4081.3.0-a-38674a3e2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image 
filesystem,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-38674a3e2a,},FirstTimestamp:2025-01-30 13:50:56.29251078 +0000 UTC m=+0.582098325,LastTimestamp:2025-01-30 13:50:56.29251078 +0000 UTC m=+0.582098325,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-38674a3e2a,}" Jan 30 13:51:00.863180 kubelet[2733]: E0130 13:51:00.863063 2733 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-38674a3e2a.181f7cab4e72ee32 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-38674a3e2a,UID:ci-4081.3.0-a-38674a3e2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node ci-4081.3.0-a-38674a3e2a status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-38674a3e2a,},FirstTimestamp:2025-01-30 13:50:56.38570757 +0000 UTC m=+0.675295115,LastTimestamp:2025-01-30 13:50:56.38570757 +0000 UTC m=+0.675295115,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-38674a3e2a,}" Jan 30 13:51:00.920515 kubelet[2733]: E0130 13:51:00.920130 2733 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.3.0-a-38674a3e2a.181f7cab4e73467a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-38674a3e2a,UID:ci-4081.3.0-a-38674a3e2a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node ci-4081.3.0-a-38674a3e2a status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-38674a3e2a,},FirstTimestamp:2025-01-30 13:50:56.38573017 +0000 UTC m=+0.675317815,LastTimestamp:2025-01-30 13:50:56.38573017 +0000 UTC m=+0.675317815,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-38674a3e2a,}" Jan 30 13:51:01.278260 kubelet[2733]: I0130 13:51:01.277713 2733 apiserver.go:52] "Watching apiserver" Jan 30 13:51:01.291895 kubelet[2733]: I0130 13:51:01.291860 2733 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:01.386397 kubelet[2733]: E0130 13:51:01.386352 2733 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:02.722725 systemd[1]: Reloading requested from client PID 3006 ('systemctl') (unit session-9.scope)... Jan 30 13:51:02.722763 systemd[1]: Reloading... Jan 30 13:51:02.842040 zram_generator::config[3046]: No configuration found. Jan 30 13:51:02.967566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:51:03.065706 systemd[1]: Reloading finished in 342 ms. Jan 30 13:51:03.112451 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:03.135566 systemd[1]: kubelet.service: Deactivated successfully. 
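
The repeated `Failed to ensure lease exists, will retry` entries earlier in the log (interval climbing 400ms → 800ms → 1.6s) are the kubelet's node-lease controller failing its get-or-create against an API server that is still refusing connections. A hedged client-go sketch of that get-or-create, assuming a kubeconfig at /etc/kubernetes/kubelet.conf and the kubelet's default 40-second lease duration:

```go
// Sketch of the get-or-create behind "Failed to ensure lease exists".
// Node and namespace names are taken from the log; everything else is
// an assumption for illustration.
package main

import (
	"context"
	"log"

	coordv1 "k8s.io/api/coordination/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	const node = "ci-4081.3.0-a-38674a3e2a"
	ctx := context.Background()
	leases := cs.CoordinationV1().Leases("kube-node-lease")

	_, err = leases.Get(ctx, node, metav1.GetOptions{})
	switch {
	case err == nil:
		// Lease already exists; the kubelet would renew it from here.
	case apierrors.IsNotFound(err):
		holder := node
		var seconds int32 = 40 // kubelet default lease duration
		_, err = leases.Create(ctx, &coordv1.Lease{
			ObjectMeta: metav1.ObjectMeta{Name: node, Namespace: "kube-node-lease"},
			Spec: coordv1.LeaseSpec{
				HolderIdentity:       &holder,
				LeaseDurationSeconds: &seconds,
			},
		}, metav1.CreateOptions{})
		if err != nil {
			log.Fatal(err)
		}
	default:
		// With 10.200.8.14:6443 refusing connections, this branch is the
		// retried failure logged by controller.go:145 above.
		log.Fatal(err)
	}
}
```
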
Jan 30 13:51:03.135842 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:03.142352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:51:03.249137 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:51:03.255315 (kubelet)[3113]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:51:03.307020 kubelet[3113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:03.307020 kubelet[3113]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:51:03.307020 kubelet[3113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:51:03.307866 kubelet[3113]: I0130 13:51:03.307700 3113 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:51:03.315961 kubelet[3113]: I0130 13:51:03.315853 3113 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 30 13:51:03.315961 kubelet[3113]: I0130 13:51:03.315898 3113 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:51:03.317439 kubelet[3113]: I0130 13:51:03.316592 3113 server.go:929] "Client rotation is on, will bootstrap in background" Jan 30 13:51:03.319455 kubelet[3113]: I0130 13:51:03.319431 3113 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:51:03.325060 kubelet[3113]: I0130 13:51:03.324762 3113 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:51:03.331103 kubelet[3113]: E0130 13:51:03.331042 3113 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 30 13:51:03.331103 kubelet[3113]: I0130 13:51:03.331079 3113 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 30 13:51:03.334851 kubelet[3113]: I0130 13:51:03.334821 3113 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:51:03.335061 kubelet[3113]: I0130 13:51:03.335042 3113 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 30 13:51:03.335406 kubelet[3113]: I0130 13:51:03.335362 3113 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:51:03.335718 kubelet[3113]: I0130 13:51:03.335400 3113 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-38674a3e2a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 30 13:51:03.335887 kubelet[3113]: I0130 13:51:03.335730 3113 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:51:03.335887 kubelet[3113]: I0130 13:51:03.335744 3113 container_manager_linux.go:300] "Creating device plugin manager" Jan 30 13:51:03.335887 kubelet[3113]: I0130 13:51:03.335804 3113 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:03.336030 kubelet[3113]: I0130 13:51:03.335979 3113 kubelet.go:408] "Attempting to sync node with API server" Jan 30 13:51:03.336291 kubelet[3113]: I0130 13:51:03.336095 3113 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:51:03.336291 kubelet[3113]: I0130 13:51:03.336136 3113 kubelet.go:314] "Adding apiserver pod source" Jan 30 13:51:03.337223 kubelet[3113]: I0130 13:51:03.337077 3113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:51:03.342019 kubelet[3113]: I0130 13:51:03.341410 3113 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:51:03.342019 kubelet[3113]: I0130 13:51:03.341934 3113 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:51:03.342584 kubelet[3113]: I0130 13:51:03.342567 3113 server.go:1269] "Started kubelet" Jan 30 13:51:03.346623 kubelet[3113]: I0130 13:51:03.346172 3113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:51:03.356091 
kubelet[3113]: I0130 13:51:03.355950 3113 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:51:03.359249 kubelet[3113]: I0130 13:51:03.359230 3113 server.go:460] "Adding debug handlers to kubelet server" Jan 30 13:51:03.360401 kubelet[3113]: I0130 13:51:03.360347 3113 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:51:03.360675 kubelet[3113]: I0130 13:51:03.360663 3113 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:51:03.360972 kubelet[3113]: I0130 13:51:03.360957 3113 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 30 13:51:03.361460 kubelet[3113]: E0130 13:51:03.361438 3113 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-4081.3.0-a-38674a3e2a\" not found" Jan 30 13:51:03.362689 kubelet[3113]: I0130 13:51:03.362654 3113 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 30 13:51:03.363199 kubelet[3113]: I0130 13:51:03.363182 3113 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 30 13:51:03.371730 kubelet[3113]: I0130 13:51:03.371602 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:51:03.371730 kubelet[3113]: I0130 13:51:03.371642 3113 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:51:03.374781 kubelet[3113]: I0130 13:51:03.374494 3113 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:51:03.374781 kubelet[3113]: I0130 13:51:03.374537 3113 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:51:03.374781 kubelet[3113]: I0130 13:51:03.374560 3113 kubelet.go:2321] "Starting kubelet main sync loop" Jan 30 13:51:03.374781 kubelet[3113]: E0130 13:51:03.374607 3113 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:51:03.379125 kubelet[3113]: I0130 13:51:03.379093 3113 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:51:03.379299 kubelet[3113]: I0130 13:51:03.379232 3113 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:51:03.388036 kubelet[3113]: I0130 13:51:03.387967 3113 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:51:03.437615 kubelet[3113]: I0130 13:51:03.437582 3113 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:51:03.437615 kubelet[3113]: I0130 13:51:03.437600 3113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:51:03.437615 kubelet[3113]: I0130 13:51:03.437623 3113 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:51:03.437860 kubelet[3113]: I0130 13:51:03.437798 3113 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:51:03.437860 kubelet[3113]: I0130 13:51:03.437811 3113 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:51:03.437860 kubelet[3113]: I0130 13:51:03.437834 3113 policy_none.go:49] "None policy: Start" Jan 30 13:51:03.438584 kubelet[3113]: I0130 13:51:03.438560 3113 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:51:03.438584 kubelet[3113]: I0130 
13:51:03.438586 3113 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:51:03.438781 kubelet[3113]: I0130 13:51:03.438762 3113 state_mem.go:75] "Updated machine memory state" Jan 30 13:51:03.443483 kubelet[3113]: I0130 13:51:03.443457 3113 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:51:03.443845 kubelet[3113]: I0130 13:51:03.443674 3113 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 30 13:51:03.443845 kubelet[3113]: I0130 13:51:03.443691 3113 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:51:03.444797 kubelet[3113]: I0130 13:51:03.444771 3113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:51:03.483158 kubelet[3113]: W0130 13:51:03.483125 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:03.488267 kubelet[3113]: W0130 13:51:03.488237 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:03.488512 kubelet[3113]: W0130 13:51:03.488482 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:03.551717 kubelet[3113]: I0130 13:51:03.551684 3113 kubelet_node_status.go:72] "Attempting to register node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.561783 kubelet[3113]: I0130 13:51:03.561749 3113 kubelet_node_status.go:111] "Node was previously registered" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.561926 kubelet[3113]: I0130 13:51:03.561847 3113 kubelet_node_status.go:75] "Successfully registered node" node="ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.572797 kubelet[3113]: I0130 13:51:03.572165 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.572797 kubelet[3113]: I0130 13:51:03.572209 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.572797 kubelet[3113]: I0130 13:51:03.572236 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.572797 kubelet[3113]: I0130 13:51:03.572260 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.572797 kubelet[3113]: I0130 13:51:03.572286 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5bf314aa7003aff32e3b3b6d29af0d9c-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-38674a3e2a\" (UID: \"5bf314aa7003aff32e3b3b6d29af0d9c\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.573109 kubelet[3113]: I0130 13:51:03.572316 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c44bf465ec86aaefd6c1facc996f7bc2-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" (UID: \"c44bf465ec86aaefd6c1facc996f7bc2\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.573109 kubelet[3113]: I0130 13:51:03.572336 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.573109 kubelet[3113]: I0130 13:51:03.572363 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:03.573109 kubelet[3113]: I0130 13:51:03.572390 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2465438ae04a07508ff23ceeaa4cf770-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-38674a3e2a\" (UID: \"2465438ae04a07508ff23ceeaa4cf770\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:04.339115 kubelet[3113]: I0130 13:51:04.339054 3113 apiserver.go:52] "Watching apiserver" Jan 30 13:51:04.364378 kubelet[3113]: I0130 13:51:04.364311 3113 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 30 13:51:04.438283 kubelet[3113]: W0130 13:51:04.438221 3113 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:51:04.438804 kubelet[3113]: E0130 13:51:04.438569 3113 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-38674a3e2a\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" Jan 30 13:51:04.467781 kubelet[3113]: I0130 13:51:04.467293 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-38674a3e2a" podStartSLOduration=1.467268604 podStartE2EDuration="1.467268604s" podCreationTimestamp="2025-01-30 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:04.452869307 +0000 UTC m=+1.191806178" watchObservedRunningTime="2025-01-30 
13:51:04.467268604 +0000 UTC m=+1.206205475" Jan 30 13:51:04.481281 kubelet[3113]: I0130 13:51:04.480945 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-38674a3e2a" podStartSLOduration=1.48089448 podStartE2EDuration="1.48089448s" podCreationTimestamp="2025-01-30 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:04.468621542 +0000 UTC m=+1.207558313" watchObservedRunningTime="2025-01-30 13:51:04.48089448 +0000 UTC m=+1.219831451" Jan 30 13:51:04.481281 kubelet[3113]: I0130 13:51:04.481124 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-38674a3e2a" podStartSLOduration=1.481115986 podStartE2EDuration="1.481115986s" podCreationTimestamp="2025-01-30 13:51:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:04.479130932 +0000 UTC m=+1.218067703" watchObservedRunningTime="2025-01-30 13:51:04.481115986 +0000 UTC m=+1.220052757" Jan 30 13:51:07.319876 kubelet[3113]: I0130 13:51:07.319815 3113 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:51:07.320724 kubelet[3113]: I0130 13:51:07.320541 3113 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:51:07.320779 containerd[1673]: time="2025-01-30T13:51:07.320281508Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:51:08.291322 systemd[1]: Created slice kubepods-besteffort-poda835805e_bba0_433f_94bc_719a445577aa.slice - libcontainer container kubepods-besteffort-poda835805e_bba0_433f_94bc_719a445577aa.slice. 
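
The `kubepods-burstable-pod<uid>.slice` and `kubepods-besteffort-pod<uid>.slice` names in the `Created slice` entries follow the systemd cgroup driver's convention for Burstable and BestEffort pods: the QoS class plus the pod UID with dashes escaped to underscores. An illustrative helper (not kubelet code) that reproduces the names seen in this log:

```go
// Illustrative only: reproduce the slice names logged above, e.g.
// UID a835805e-bba0-433f-94bc-719a445577aa (kube-proxy-kjqv7, BestEffort)
// -> kubepods-besteffort-poda835805e_bba0_433f_94bc_719a445577aa.slice
package main

import (
	"fmt"
	"strings"
)

func podSlice(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "a835805e-bba0-433f-94bc-719a445577aa"))
	fmt.Println(podSlice("burstable", "2465438a-e04a-0750-8ff2-3ceeaa4cf770")) // hypothetical UID form
}
```
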
Jan 30 13:51:08.306812 kubelet[3113]: I0130 13:51:08.306755 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a835805e-bba0-433f-94bc-719a445577aa-kube-proxy\") pod \"kube-proxy-kjqv7\" (UID: \"a835805e-bba0-433f-94bc-719a445577aa\") " pod="kube-system/kube-proxy-kjqv7" Jan 30 13:51:08.306812 kubelet[3113]: I0130 13:51:08.306799 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a835805e-bba0-433f-94bc-719a445577aa-xtables-lock\") pod \"kube-proxy-kjqv7\" (UID: \"a835805e-bba0-433f-94bc-719a445577aa\") " pod="kube-system/kube-proxy-kjqv7" Jan 30 13:51:08.307049 kubelet[3113]: I0130 13:51:08.306822 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a835805e-bba0-433f-94bc-719a445577aa-lib-modules\") pod \"kube-proxy-kjqv7\" (UID: \"a835805e-bba0-433f-94bc-719a445577aa\") " pod="kube-system/kube-proxy-kjqv7" Jan 30 13:51:08.307049 kubelet[3113]: I0130 13:51:08.306844 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thnqf\" (UniqueName: \"kubernetes.io/projected/a835805e-bba0-433f-94bc-719a445577aa-kube-api-access-thnqf\") pod \"kube-proxy-kjqv7\" (UID: \"a835805e-bba0-433f-94bc-719a445577aa\") " pod="kube-system/kube-proxy-kjqv7" Jan 30 13:51:08.413619 systemd[1]: Created slice kubepods-besteffort-pod25ce2be7_06a1_45ec_b6fb_d69b0f733939.slice - libcontainer container kubepods-besteffort-pod25ce2be7_06a1_45ec_b6fb_d69b0f733939.slice. Jan 30 13:51:08.508841 kubelet[3113]: I0130 13:51:08.508775 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng599\" (UniqueName: \"kubernetes.io/projected/25ce2be7-06a1-45ec-b6fb-d69b0f733939-kube-api-access-ng599\") pod \"tigera-operator-76c4976dd7-xrkt6\" (UID: \"25ce2be7-06a1-45ec-b6fb-d69b0f733939\") " pod="tigera-operator/tigera-operator-76c4976dd7-xrkt6" Jan 30 13:51:08.508841 kubelet[3113]: I0130 13:51:08.508837 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/25ce2be7-06a1-45ec-b6fb-d69b0f733939-var-lib-calico\") pod \"tigera-operator-76c4976dd7-xrkt6\" (UID: \"25ce2be7-06a1-45ec-b6fb-d69b0f733939\") " pod="tigera-operator/tigera-operator-76c4976dd7-xrkt6" Jan 30 13:51:08.599653 containerd[1673]: time="2025-01-30T13:51:08.599592204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjqv7,Uid:a835805e-bba0-433f-94bc-719a445577aa,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:08.647510 containerd[1673]: time="2025-01-30T13:51:08.647389594Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:08.647510 containerd[1673]: time="2025-01-30T13:51:08.647454196Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:08.647510 containerd[1673]: time="2025-01-30T13:51:08.647469496Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:08.647988 containerd[1673]: time="2025-01-30T13:51:08.647565999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:08.674184 systemd[1]: Started cri-containerd-217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f.scope - libcontainer container 217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f. Jan 30 13:51:08.696315 containerd[1673]: time="2025-01-30T13:51:08.696255311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjqv7,Uid:a835805e-bba0-433f-94bc-719a445577aa,Namespace:kube-system,Attempt:0,} returns sandbox id \"217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f\"" Jan 30 13:51:08.699700 containerd[1673]: time="2025-01-30T13:51:08.699647896Z" level=info msg="CreateContainer within sandbox \"217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:51:08.717759 containerd[1673]: time="2025-01-30T13:51:08.717701845Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xrkt6,Uid:25ce2be7-06a1-45ec-b6fb-d69b0f733939,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:51:08.740716 containerd[1673]: time="2025-01-30T13:51:08.740660917Z" level=info msg="CreateContainer within sandbox \"217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f78027065843b5f6e882b21ba6646cd94239aaaa3fa17d973b56f1f2898ebbb5\"" Jan 30 13:51:08.743013 containerd[1673]: time="2025-01-30T13:51:08.741349634Z" level=info msg="StartContainer for \"f78027065843b5f6e882b21ba6646cd94239aaaa3fa17d973b56f1f2898ebbb5\"" Jan 30 13:51:08.778211 systemd[1]: Started cri-containerd-f78027065843b5f6e882b21ba6646cd94239aaaa3fa17d973b56f1f2898ebbb5.scope - libcontainer container f78027065843b5f6e882b21ba6646cd94239aaaa3fa17d973b56f1f2898ebbb5. Jan 30 13:51:08.789159 containerd[1673]: time="2025-01-30T13:51:08.789046622Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:51:08.790244 containerd[1673]: time="2025-01-30T13:51:08.790180150Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:51:08.796028 containerd[1673]: time="2025-01-30T13:51:08.794697663Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:08.796028 containerd[1673]: time="2025-01-30T13:51:08.795576285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:51:08.829225 systemd[1]: Started cri-containerd-c64961a80eb80deab298a34bf34f5b131ab7ea52ba1cb39ad4e22411085fbd6e.scope - libcontainer container c64961a80eb80deab298a34bf34f5b131ab7ea52ba1cb39ad4e22411085fbd6e. 
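
Each `CreateContainer within sandbox … returns container id` / `StartContainer … returns successfully` pair corresponds to two further CRI calls against a sandbox id returned earlier. A sketch for the kube-proxy container, with the sandbox id, names, and UID taken from the log; the image reference is an assumption, since the log never prints it:

```go
// Sketch: CRI CreateContainer + StartContainer against the kube-proxy
// sandbox id logged above. Image reference is assumed for illustration.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sandboxID := "217655baac40abeed8a26b650c5539402215839e91e26040fca48d330643579f"

	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sandboxID,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy", Attempt: 0},
			// Assumed image; the kubelet fills this from the pod spec.
			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.31.0"},
		},
		SandboxConfig: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "kube-proxy-kjqv7",
				Uid:       "a835805e-bba0-433f-94bc-719a445577aa",
				Namespace: "kube-system",
			},
		},
	})
	if err != nil {
		log.Fatal(err)
	}
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{
		ContainerId: created.ContainerId,
	}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", created.ContainerId) // cf. the f78027… id above
}
```
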
Jan 30 13:51:08.858633 containerd[1673]: time="2025-01-30T13:51:08.857813035Z" level=info msg="StartContainer for \"f78027065843b5f6e882b21ba6646cd94239aaaa3fa17d973b56f1f2898ebbb5\" returns successfully" Jan 30 13:51:08.932926 containerd[1673]: time="2025-01-30T13:51:08.931075159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4976dd7-xrkt6,Uid:25ce2be7-06a1-45ec-b6fb-d69b0f733939,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c64961a80eb80deab298a34bf34f5b131ab7ea52ba1cb39ad4e22411085fbd6e\"" Jan 30 13:51:08.939796 containerd[1673]: time="2025-01-30T13:51:08.938078834Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:51:09.009175 sudo[2208]: pam_unix(sudo:session): session closed for user root Jan 30 13:51:09.124362 sshd[2205]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:09.129737 systemd[1]: sshd@6-10.200.8.14:22-10.200.16.10:51066.service: Deactivated successfully. Jan 30 13:51:09.131846 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:51:09.132109 systemd[1]: session-9.scope: Consumed 4.048s CPU time, 154.9M memory peak, 0B memory swap peak. Jan 30 13:51:09.132753 systemd-logind[1646]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:51:09.133820 systemd-logind[1646]: Removed session 9. Jan 30 13:51:09.467708 kubelet[3113]: I0130 13:51:09.466953 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjqv7" podStartSLOduration=1.466928105 podStartE2EDuration="1.466928105s" podCreationTimestamp="2025-01-30 13:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:51:09.466751201 +0000 UTC m=+6.205687972" watchObservedRunningTime="2025-01-30 13:51:09.466928105 +0000 UTC m=+6.205864976" Jan 30 13:51:10.839326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201334416.mount: Deactivated successfully. 
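
`PullImage "quay.io/tigera/operator:v1.36.2"` above is a CRI ImageService call; the later `Pulled image … in 2.515317523s` entry reports its completion. A minimal sketch of the same pull, again assuming containerd's default socket:

```go
// Sketch: CRI ImageService PullImage for the tigera-operator image
// named in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(context.Background(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	// The "Pulled image ... with image id sha256:3045aa…" entry below
	// corresponds to this returned reference.
	fmt.Println("image ref:", resp.ImageRef)
}
```
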
Jan 30 13:51:11.439794 containerd[1673]: time="2025-01-30T13:51:11.439727229Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:11.442250 containerd[1673]: time="2025-01-30T13:51:11.442199488Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:51:11.445861 containerd[1673]: time="2025-01-30T13:51:11.445801475Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:11.453724 containerd[1673]: time="2025-01-30T13:51:11.453641063Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:11.454581 containerd[1673]: time="2025-01-30T13:51:11.454422782Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.515317523s" Jan 30 13:51:11.454581 containerd[1673]: time="2025-01-30T13:51:11.454466783Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:51:11.457437 containerd[1673]: time="2025-01-30T13:51:11.457189749Z" level=info msg="CreateContainer within sandbox \"c64961a80eb80deab298a34bf34f5b131ab7ea52ba1cb39ad4e22411085fbd6e\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:51:11.490898 containerd[1673]: time="2025-01-30T13:51:11.490844058Z" level=info msg="CreateContainer within sandbox \"c64961a80eb80deab298a34bf34f5b131ab7ea52ba1cb39ad4e22411085fbd6e\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ab5d60b2f2c9185b2126b8e4a15025292b1a1446e28261c12cc9fcfc71fe9e97\"" Jan 30 13:51:11.492321 containerd[1673]: time="2025-01-30T13:51:11.491287669Z" level=info msg="StartContainer for \"ab5d60b2f2c9185b2126b8e4a15025292b1a1446e28261c12cc9fcfc71fe9e97\"" Jan 30 13:51:11.523173 systemd[1]: Started cri-containerd-ab5d60b2f2c9185b2126b8e4a15025292b1a1446e28261c12cc9fcfc71fe9e97.scope - libcontainer container ab5d60b2f2c9185b2126b8e4a15025292b1a1446e28261c12cc9fcfc71fe9e97. 
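
The `Pulled image` entry above echoes the image id, repo tag, repo digest, and size that the image service records; an `ImageStatus` query returns the same fields, as in this sketch (socket path assumed as before):

```go
// Sketch: query CRI ImageStatus to read back the id/tags/digests/size
// fields printed in the "Pulled image" entry above.
package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	st, err := img.ImageStatus(context.Background(), &runtimeapi.ImageStatusRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.36.2"},
	})
	if err != nil {
		log.Fatal(err)
	}
	if st.Image == nil {
		log.Fatal("image not present")
	}
	fmt.Println("id:        ", st.Image.Id)
	fmt.Println("repo tags: ", st.Image.RepoTags)
	fmt.Println("digests:   ", st.Image.RepoDigests)
	fmt.Println("size bytes:", st.Image.Size_) // cf. size "21758492" above
}
```
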
Jan 30 13:51:11.549871 containerd[1673]: time="2025-01-30T13:51:11.549809276Z" level=info msg="StartContainer for \"ab5d60b2f2c9185b2126b8e4a15025292b1a1446e28261c12cc9fcfc71fe9e97\" returns successfully" Jan 30 13:51:12.467955 kubelet[3113]: I0130 13:51:12.467779 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4976dd7-xrkt6" podStartSLOduration=1.9487561420000001 podStartE2EDuration="4.467752855s" podCreationTimestamp="2025-01-30 13:51:08 +0000 UTC" firstStartedPulling="2025-01-30 13:51:08.936539996 +0000 UTC m=+5.675476867" lastFinishedPulling="2025-01-30 13:51:11.455536809 +0000 UTC m=+8.194473580" observedRunningTime="2025-01-30 13:51:12.46713474 +0000 UTC m=+9.206071511" watchObservedRunningTime="2025-01-30 13:51:12.467752855 +0000 UTC m=+9.206689626" Jan 30 13:51:14.724614 systemd[1]: Created slice kubepods-besteffort-pod15229b91_3171_4a6d_bc5e_9d7f16114666.slice - libcontainer container kubepods-besteffort-pod15229b91_3171_4a6d_bc5e_9d7f16114666.slice. Jan 30 13:51:14.744137 kubelet[3113]: I0130 13:51:14.743948 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svmg8\" (UniqueName: \"kubernetes.io/projected/15229b91-3171-4a6d-bc5e-9d7f16114666-kube-api-access-svmg8\") pod \"calico-typha-867fd975d4-h8vp9\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " pod="calico-system/calico-typha-867fd975d4-h8vp9" Jan 30 13:51:14.745340 kubelet[3113]: I0130 13:51:14.745137 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/15229b91-3171-4a6d-bc5e-9d7f16114666-typha-certs\") pod \"calico-typha-867fd975d4-h8vp9\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " pod="calico-system/calico-typha-867fd975d4-h8vp9" Jan 30 13:51:14.745340 kubelet[3113]: I0130 13:51:14.745187 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15229b91-3171-4a6d-bc5e-9d7f16114666-tigera-ca-bundle\") pod \"calico-typha-867fd975d4-h8vp9\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " pod="calico-system/calico-typha-867fd975d4-h8vp9" Jan 30 13:51:14.823319 systemd[1]: Created slice kubepods-besteffort-pod13aefa74_e574_433c_97ec_9a7237917ee6.slice - libcontainer container kubepods-besteffort-pod13aefa74_e574_433c_97ec_9a7237917ee6.slice. 
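
The reconciler's `VerifyControllerAttachedVolume` entries above and below print each volume's UniqueName as `<plugin>/<pod-UID>-<volume-name>`. An illustrative formatter (not kubelet code) that reproduces the strings seen for calico-node's host-path and secret volumes:

```go
// Illustrative only: format the UniqueName strings the volume reconciler
// logs, i.e. <plugin-name>/<pod-UID>-<volume-name>.
package main

import "fmt"

func uniqueVolumeName(plugin, podUID, volume string) string {
	return fmt.Sprintf("%s/%s-%s", plugin, podUID, volume)
}

func main() {
	fmt.Println(uniqueVolumeName("kubernetes.io/host-path",
		"13aefa74-e574-433c-97ec-9a7237917ee6", "cni-net-dir"))
	fmt.Println(uniqueVolumeName("kubernetes.io/secret",
		"13aefa74-e574-433c-97ec-9a7237917ee6", "node-certs"))
}
```
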
Jan 30 13:51:14.846440 kubelet[3113]: I0130 13:51:14.846381 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-net-dir\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846619 kubelet[3113]: I0130 13:51:14.846455 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-lib-calico\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846619 kubelet[3113]: I0130 13:51:14.846497 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13aefa74-e574-433c-97ec-9a7237917ee6-tigera-ca-bundle\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846619 kubelet[3113]: I0130 13:51:14.846521 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13aefa74-e574-433c-97ec-9a7237917ee6-node-certs\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846619 kubelet[3113]: I0130 13:51:14.846555 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-policysync\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846619 kubelet[3113]: I0130 13:51:14.846584 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-run-calico\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846836 kubelet[3113]: I0130 13:51:14.846623 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-xtables-lock\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846836 kubelet[3113]: I0130 13:51:14.846645 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-log-dir\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846836 kubelet[3113]: I0130 13:51:14.846667 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-lib-modules\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846836 kubelet[3113]: I0130 13:51:14.846688 3113 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-bin-dir\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.846836 kubelet[3113]: I0130 13:51:14.846711 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-flexvol-driver-host\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.847036 kubelet[3113]: I0130 13:51:14.846734 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sx57z\" (UniqueName: \"kubernetes.io/projected/13aefa74-e574-433c-97ec-9a7237917ee6-kube-api-access-sx57z\") pod \"calico-node-xfthv\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " pod="calico-system/calico-node-xfthv" Jan 30 13:51:14.945942 kubelet[3113]: E0130 13:51:14.945709 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:14.950888 kubelet[3113]: E0130 13:51:14.950859 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:14.951371 kubelet[3113]: W0130 13:51:14.951128 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:14.951371 kubelet[3113]: E0130 13:51:14.951164 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:14.951748 kubelet[3113]: E0130 13:51:14.951721 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:14.951924 kubelet[3113]: W0130 13:51:14.951856 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:14.952057 kubelet[3113]: E0130 13:51:14.951984 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:14.952971 kubelet[3113]: E0130 13:51:14.952847 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:14.952971 kubelet[3113]: W0130 13:51:14.952863 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:14.953409 kubelet[3113]: E0130 13:51:14.953306 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
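The triplet repeated above is one failed probe of the nodeagent~uds FlexVolume driver: kubelet execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the binary is missing, and decoding the empty output as JSON then fails. Both messages are standard Go errors; a short sketch reproduces them (illustrative only, not kubelet's driver-call code; the bare command name uds is assumed not to exist on $PATH):

```go
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// Probing a driver whose binary is absent: the lookup fails before the
	// process ever runs, so the captured output stays empty ("").
	out, err := exec.Command("uds", "init").Output() // "uds" assumed absent from $PATH
	fmt.Printf("driver call failed: %v, output: %q\n", err, out)
	// -> exec: "uds": executable file not found in $PATH

	// kubelet then tries to decode that empty output as a JSON status object,
	// which is where "unexpected end of JSON input" comes from.
	var status map[string]interface{}
	if err := json.Unmarshal(out, &status); err != nil {
		fmt.Println("failed to unmarshal:", err)
		// -> unexpected end of JSON input
	}
}
```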
Jan 30 13:51:15.033021 containerd[1673]: time="2025-01-30T13:51:15.032946654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867fd975d4-h8vp9,Uid:15229b91-3171-4a6d-bc5e-9d7f16114666,Namespace:calico-system,Attempt:0,}"
Jan 30 13:51:15.052041 kubelet[3113]: I0130 13:51:15.049228 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1654d24a-276c-4733-ab3c-b2a324f91922-kubelet-dir\") pod \"csi-node-driver-mcmv9\" (UID: \"1654d24a-276c-4733-ab3c-b2a324f91922\") " pod="calico-system/csi-node-driver-mcmv9"
Jan 30 13:51:15.052041 kubelet[3113]: I0130 13:51:15.049568 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/1654d24a-276c-4733-ab3c-b2a324f91922-varrun\") pod \"csi-node-driver-mcmv9\" (UID: \"1654d24a-276c-4733-ab3c-b2a324f91922\") " pod="calico-system/csi-node-driver-mcmv9"
Jan 30 13:51:15.052502 kubelet[3113]: I0130 13:51:15.050750 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1654d24a-276c-4733-ab3c-b2a324f91922-socket-dir\") pod \"csi-node-driver-mcmv9\" (UID: \"1654d24a-276c-4733-ab3c-b2a324f91922\") " pod="calico-system/csi-node-driver-mcmv9"
Jan 30 13:51:15.053183 kubelet[3113]: I0130 13:51:15.051664 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wr2ft\" (UniqueName: \"kubernetes.io/projected/1654d24a-276c-4733-ab3c-b2a324f91922-kube-api-access-wr2ft\") pod \"csi-node-driver-mcmv9\" (UID: \"1654d24a-276c-4733-ab3c-b2a324f91922\") " pod="calico-system/csi-node-driver-mcmv9"
Jan 30 13:51:15.053683 kubelet[3113]: I0130 13:51:15.052481 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1654d24a-276c-4733-ab3c-b2a324f91922-registration-dir\") pod \"csi-node-driver-mcmv9\" (UID: \"1654d24a-276c-4733-ab3c-b2a324f91922\") " pod="calico-system/csi-node-driver-mcmv9"
Jan 30 13:51:15.091031 containerd[1673]: time="2025-01-30T13:51:15.090751245Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:51:15.091031 containerd[1673]: time="2025-01-30T13:51:15.090862447Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:15.091031 containerd[1673]: time="2025-01-30T13:51:15.090954749Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:15.091379 containerd[1673]: time="2025-01-30T13:51:15.091160054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:15.118243 systemd[1]: Started cri-containerd-d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b.scope - libcontainer container d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b.
Jan 30 13:51:15.127589 containerd[1673]: time="2025-01-30T13:51:15.127540529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfthv,Uid:13aefa74-e574-433c-97ec-9a7237917ee6,Namespace:calico-system,Attempt:0,}"
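Each RunPodSandbox line is the kubelet driving containerd over the CRI; the "loading plugin … runtime=io.containerd.runc.v2" lines correspond to the runc v2 shim coming up for each sandbox, and the systemd "Started cri-containerd-<id>.scope" lines are the matching cgroup scopes. A hedged sketch of issuing the same CRI call directly (assumes the k8s.io/cri-api module and containerd's default socket path; illustrative only, not kubelet code):

```go
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial containerd's CRI endpoint (socket path assumed).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Same metadata fields that appear in the RunPodSandbox log line above.
	req := &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "calico-node-xfthv",
				Uid:       "13aefa74-e574-433c-97ec-9a7237917ee6",
				Namespace: "calico-system",
				Attempt:   0,
			},
		},
	}
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).RunPodSandbox(ctx, req)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId) // e.g. 4874066ede25...
}
```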
Jan 30 13:51:15.218381 containerd[1673]: time="2025-01-30T13:51:15.215124436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 30 13:51:15.218381 containerd[1673]: time="2025-01-30T13:51:15.215208438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 30 13:51:15.218381 containerd[1673]: time="2025-01-30T13:51:15.215230839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:15.218381 containerd[1673]: time="2025-01-30T13:51:15.215365642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 30 13:51:15.253168 systemd[1]: Started cri-containerd-4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e.scope - libcontainer container 4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e.
Jan 30 13:51:15.281691 containerd[1673]: time="2025-01-30T13:51:15.281500433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-867fd975d4-h8vp9,Uid:15229b91-3171-4a6d-bc5e-9d7f16114666,Namespace:calico-system,Attempt:0,} returns sandbox id \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\""
Jan 30 13:51:15.286063 containerd[1673]: time="2025-01-30T13:51:15.284653208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 30 13:51:15.372742 containerd[1673]: time="2025-01-30T13:51:15.372666625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-xfthv,Uid:13aefa74-e574-433c-97ec-9a7237917ee6,Namespace:calico-system,Attempt:0,} returns sandbox id \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\""
Jan 30 13:51:16.375058 kubelet[3113]: E0130 13:51:16.374977 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922"
Jan 30 13:51:16.836214 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346879496.mount: Deactivated successfully.
Jan 30 13:51:17.854750 containerd[1673]: time="2025-01-30T13:51:17.854694811Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:17.857678 containerd[1673]: time="2025-01-30T13:51:17.857614480Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363"
Jan 30 13:51:17.860919 containerd[1673]: time="2025-01-30T13:51:17.860865958Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:17.866607 containerd[1673]: time="2025-01-30T13:51:17.866555193Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:51:17.867591 containerd[1673]: time="2025-01-30T13:51:17.867143907Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.582446998s"
Jan 30 13:51:17.867591 containerd[1673]: time="2025-01-30T13:51:17.867186909Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 30 13:51:17.868781 containerd[1673]: time="2025-01-30T13:51:17.868749946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Jan 30 13:51:17.882825 containerd[1673]: time="2025-01-30T13:51:17.882768980Z" level=info msg="CreateContainer within sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 30 13:51:17.934418 containerd[1673]: time="2025-01-30T13:51:17.934229006Z" level=info msg="CreateContainer within sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\""
\"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\"" Jan 30 13:51:17.935433 containerd[1673]: time="2025-01-30T13:51:17.934934922Z" level=info msg="StartContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\"" Jan 30 13:51:17.980167 systemd[1]: Started cri-containerd-064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331.scope - libcontainer container 064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331. Jan 30 13:51:18.023277 containerd[1673]: time="2025-01-30T13:51:18.023176124Z" level=info msg="StartContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" returns successfully" Jan 30 13:51:18.375509 kubelet[3113]: E0130 13:51:18.375416 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:18.476339 kubelet[3113]: I0130 13:51:18.475441 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-867fd975d4-h8vp9" podStartSLOduration=1.8914085630000002 podStartE2EDuration="4.475417798s" podCreationTimestamp="2025-01-30 13:51:14 +0000 UTC" firstStartedPulling="2025-01-30 13:51:15.284218398 +0000 UTC m=+12.023155169" lastFinishedPulling="2025-01-30 13:51:17.868227633 +0000 UTC m=+14.607164404" observedRunningTime="2025-01-30 13:51:18.474395573 +0000 UTC m=+15.213332444" watchObservedRunningTime="2025-01-30 13:51:18.475417798 +0000 UTC m=+15.214354569" Jan 30 13:51:18.562284 kubelet[3113]: E0130 13:51:18.562246 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:18.562284 kubelet[3113]: W0130 13:51:18.562271 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:18.562592 kubelet[3113]: E0130 13:51:18.562299 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:51:18.562592 kubelet[3113]: E0130 13:51:18.562567 3113 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:51:18.562592 kubelet[3113]: W0130 13:51:18.562581 3113 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:51:18.562820 kubelet[3113]: E0130 13:51:18.562599 3113 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:51:19.164589 containerd[1673]: time="2025-01-30T13:51:19.164521413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.166488 containerd[1673]: time="2025-01-30T13:51:19.166420458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:51:19.169561 containerd[1673]: time="2025-01-30T13:51:19.169494032Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.174816 containerd[1673]: time="2025-01-30T13:51:19.174052240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:19.174816 containerd[1673]: time="2025-01-30T13:51:19.174636154Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.305845508s" Jan 30 13:51:19.174816 containerd[1673]: time="2025-01-30T13:51:19.174669755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:51:19.176973 containerd[1673]: time="2025-01-30T13:51:19.176942509Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:51:19.252062 containerd[1673]: time="2025-01-30T13:51:19.251989897Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\"" Jan 30 13:51:19.252782 containerd[1673]: time="2025-01-30T13:51:19.252634012Z" level=info msg="StartContainer for \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\"" Jan 30 13:51:19.294154 systemd[1]: Started cri-containerd-584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614.scope - libcontainer container 584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614.
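[Editor's note] The driver-call.go/plugins.go failures above are a startup race rather than a fault: kubelet probes every directory under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ as a FlexVolume driver, but the nodeagent~uds/uds binary is only installed by the flexvol-driver container that has just been created. Each probe therefore execs a missing file, reads empty output, and fails to unmarshal it as JSON, which is exactly the "unexpected end of JSON input" message. For context, a FlexVolume driver's init handshake is just an executable that prints a JSON status to stdout; a minimal sketch in Go, assuming the conventional FlexVolume call/response shape (illustrative only, not Calico's actual uds driver):

```go
// Hypothetical stand-in for a FlexVolume driver binary such as
// .../volume/exec/nodeagent~uds/uds. kubelet's driver-call.go execs the
// binary with a subcommand ("init" here) and unmarshals whatever appears
// on stdout; an absent binary yields empty output, and unmarshalling
// empty input is what produces "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// driverStatus mirrors the JSON shape kubelet expects back from a driver call.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func reply(s driverStatus) {
	out, _ := json.Marshal(s)
	fmt.Println(string(out))
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		// Report success and opt out of attach/detach support.
		reply(driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}})
		return
	}
	// Calls the driver does not implement are answered with "Not supported".
	reply(driverStatus{Status: "Not supported"})
}
```

The messages stop on their own once the flexvol-driver container has copied the real binary into place.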
Jan 30 13:51:19.329639 containerd[1673]: time="2025-01-30T13:51:19.329498043Z" level=info msg="StartContainer for \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\" returns successfully" Jan 30 13:51:19.342179 systemd[1]: cri-containerd-584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614.scope: Deactivated successfully. Jan 30 13:51:19.366933 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614-rootfs.mount: Deactivated successfully. Jan 30 13:51:19.466073 kubelet[3113]: I0130 13:51:19.465507 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:51:20.375557 kubelet[3113]: E0130 13:51:20.375467 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:21.882089 containerd[1673]: time="2025-01-30T13:51:21.881969947Z" level=info msg="shim disconnected" id=584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614 namespace=k8s.io Jan 30 13:51:21.882089 containerd[1673]: time="2025-01-30T13:51:21.882070250Z" level=warning msg="cleaning up after shim disconnected" id=584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614 namespace=k8s.io Jan 30 13:51:21.882089 containerd[1673]: time="2025-01-30T13:51:21.882084550Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:22.375409 kubelet[3113]: E0130 13:51:22.375329 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:22.476888 containerd[1673]: time="2025-01-30T13:51:22.476806417Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:51:24.375888 kubelet[3113]: E0130 13:51:24.375820 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:26.375667 kubelet[3113]: E0130 13:51:26.375607 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:26.525557 containerd[1673]: time="2025-01-30T13:51:26.525496017Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.527536 containerd[1673]: time="2025-01-30T13:51:26.527471864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:51:26.529800 containerd[1673]: time="2025-01-30T13:51:26.529732518Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 13:51:26.534274 containerd[1673]: time="2025-01-30T13:51:26.534216924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:26.535449 containerd[1673]: time="2025-01-30T13:51:26.534884140Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.058025522s" Jan 30 13:51:26.535449 containerd[1673]: time="2025-01-30T13:51:26.534924941Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:51:26.537583 containerd[1673]: time="2025-01-30T13:51:26.537553603Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:51:26.573109 containerd[1673]: time="2025-01-30T13:51:26.573059147Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\"" Jan 30 13:51:26.575062 containerd[1673]: time="2025-01-30T13:51:26.573707262Z" level=info msg="StartContainer for \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\"" Jan 30 13:51:26.612644 systemd[1]: run-containerd-runc-k8s.io-cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d-runc.BQaKiF.mount: Deactivated successfully. Jan 30 13:51:26.620189 systemd[1]: Started cri-containerd-cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d.scope - libcontainer container cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d. Jan 30 13:51:26.650053 containerd[1673]: time="2025-01-30T13:51:26.649677167Z" level=info msg="StartContainer for \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\" returns successfully" Jan 30 13:51:28.046126 containerd[1673]: time="2025-01-30T13:51:28.046054740Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 30 13:51:28.047926 systemd[1]: cri-containerd-cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d.scope: Deactivated successfully. Jan 30 13:51:28.071584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d-rootfs.mount: Deactivated successfully. 
Jan 30 13:51:28.107149 kubelet[3113]: I0130 13:51:28.105552 3113 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 30 13:51:28.629144 kubelet[3113]: I0130 13:51:28.256153 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r5gb\" (UniqueName: \"kubernetes.io/projected/7c2a0430-bfea-48a8-b9a0-8ea183a3114a-kube-api-access-6r5gb\") pod \"coredns-6f6b679f8f-z8997\" (UID: \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\") " pod="kube-system/coredns-6f6b679f8f-z8997" Jan 30 13:51:28.629144 kubelet[3113]: I0130 13:51:28.256223 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrb5d\" (UniqueName: \"kubernetes.io/projected/f7fbcb71-4682-4a9b-9734-5a668b2754b3-kube-api-access-lrb5d\") pod \"coredns-6f6b679f8f-4g4nh\" (UID: \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\") " pod="kube-system/coredns-6f6b679f8f-4g4nh" Jan 30 13:51:28.629144 kubelet[3113]: I0130 13:51:28.256244 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g44kr\" (UniqueName: \"kubernetes.io/projected/7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1-kube-api-access-g44kr\") pod \"calico-apiserver-5df8c6b8fc-2m7hp\" (UID: \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\") " pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" Jan 30 13:51:28.629144 kubelet[3113]: I0130 13:51:28.256268 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6962ed73-d508-4340-81ff-7f3201a82a70-tigera-ca-bundle\") pod \"calico-kube-controllers-659c567d5c-h72ss\" (UID: \"6962ed73-d508-4340-81ff-7f3201a82a70\") " pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" Jan 30 13:51:28.629144 kubelet[3113]: I0130 13:51:28.256294 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f7fbcb71-4682-4a9b-9734-5a668b2754b3-config-volume\") pod \"coredns-6f6b679f8f-4g4nh\" (UID: \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\") " pod="kube-system/coredns-6f6b679f8f-4g4nh" Jan 30 13:51:28.156138 systemd[1]: Created slice kubepods-burstable-pod7c2a0430_bfea_48a8_b9a0_8ea183a3114a.slice - libcontainer container kubepods-burstable-pod7c2a0430_bfea_48a8_b9a0_8ea183a3114a.slice. 
Jan 30 13:51:28.629568 kubelet[3113]: I0130 13:51:28.256311 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7c2a0430-bfea-48a8-b9a0-8ea183a3114a-config-volume\") pod \"coredns-6f6b679f8f-z8997\" (UID: \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\") " pod="kube-system/coredns-6f6b679f8f-z8997" Jan 30 13:51:28.629568 kubelet[3113]: I0130 13:51:28.256331 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7245x\" (UniqueName: \"kubernetes.io/projected/d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7-kube-api-access-7245x\") pod \"calico-apiserver-5df8c6b8fc-vtx2w\" (UID: \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\") " pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" Jan 30 13:51:28.629568 kubelet[3113]: I0130 13:51:28.256357 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7-calico-apiserver-certs\") pod \"calico-apiserver-5df8c6b8fc-vtx2w\" (UID: \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\") " pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" Jan 30 13:51:28.629568 kubelet[3113]: I0130 13:51:28.256380 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1-calico-apiserver-certs\") pod \"calico-apiserver-5df8c6b8fc-2m7hp\" (UID: \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\") " pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" Jan 30 13:51:28.629568 kubelet[3113]: I0130 13:51:28.256395 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl9f9\" (UniqueName: \"kubernetes.io/projected/6962ed73-d508-4340-81ff-7f3201a82a70-kube-api-access-bl9f9\") pod \"calico-kube-controllers-659c567d5c-h72ss\" (UID: \"6962ed73-d508-4340-81ff-7f3201a82a70\") " pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" Jan 30 13:51:28.171314 systemd[1]: Created slice kubepods-burstable-podf7fbcb71_4682_4a9b_9734_5a668b2754b3.slice - libcontainer container kubepods-burstable-podf7fbcb71_4682_4a9b_9734_5a668b2754b3.slice. Jan 30 13:51:28.179198 systemd[1]: Created slice kubepods-besteffort-pod7b2d705e_4b26_45d5_a9b8_0a56f69ef4b1.slice - libcontainer container kubepods-besteffort-pod7b2d705e_4b26_45d5_a9b8_0a56f69ef4b1.slice. Jan 30 13:51:28.184600 systemd[1]: Created slice kubepods-besteffort-pod6962ed73_d508_4340_81ff_7f3201a82a70.slice - libcontainer container kubepods-besteffort-pod6962ed73_d508_4340_81ff_7f3201a82a70.slice. Jan 30 13:51:28.190125 systemd[1]: Created slice kubepods-besteffort-podd6c294f8_6ad1_443a_b8e8_6d3c60b2eab7.slice - libcontainer container kubepods-besteffort-podd6c294f8_6ad1_443a_b8e8_6d3c60b2eab7.slice. Jan 30 13:51:28.382370 systemd[1]: Created slice kubepods-besteffort-pod1654d24a_276c_4733_ab3c_b2a324f91922.slice - libcontainer container kubepods-besteffort-pod1654d24a_276c_4733_ab3c_b2a324f91922.slice. 
Jan 30 13:51:28.633221 containerd[1673]: time="2025-01-30T13:51:28.632742777Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mcmv9,Uid:1654d24a-276c-4733-ab3c-b2a324f91922,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:28.932629 containerd[1673]: time="2025-01-30T13:51:28.932145990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8997,Uid:7c2a0430-bfea-48a8-b9a0-8ea183a3114a,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:28.936782 containerd[1673]: time="2025-01-30T13:51:28.936750300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-659c567d5c-h72ss,Uid:6962ed73-d508-4340-81ff-7f3201a82a70,Namespace:calico-system,Attempt:0,}" Jan 30 13:51:28.936986 containerd[1673]: time="2025-01-30T13:51:28.936754400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-vtx2w,Uid:d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:51:28.937226 containerd[1673]: time="2025-01-30T13:51:28.936899003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4g4nh,Uid:f7fbcb71-4682-4a9b-9734-5a668b2754b3,Namespace:kube-system,Attempt:0,}" Jan 30 13:51:28.954225 containerd[1673]: time="2025-01-30T13:51:28.954162913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-2m7hp,Uid:7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:51:30.627468 containerd[1673]: time="2025-01-30T13:51:30.627328562Z" level=info msg="shim disconnected" id=cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d namespace=k8s.io Jan 30 13:51:30.628599 containerd[1673]: time="2025-01-30T13:51:30.627429864Z" level=warning msg="cleaning up after shim disconnected" id=cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d namespace=k8s.io Jan 30 13:51:30.628599 containerd[1673]: time="2025-01-30T13:51:30.627568067Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:31.502698 containerd[1673]: time="2025-01-30T13:51:31.502648556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:51:32.470074 containerd[1673]: time="2025-01-30T13:51:32.469995737Z" level=error msg="Failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.470653 containerd[1673]: time="2025-01-30T13:51:32.470443947Z" level=error msg="encountered an error cleaning up failed sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.470653 containerd[1673]: time="2025-01-30T13:51:32.470512449Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mcmv9,Uid:1654d24a-276c-4733-ab3c-b2a324f91922,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jan 30 13:51:32.470850 kubelet[3113]: E0130 13:51:32.470792 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.471242 kubelet[3113]: E0130 13:51:32.470895 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mcmv9" Jan 30 13:51:32.471242 kubelet[3113]: E0130 13:51:32.470926 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mcmv9" Jan 30 13:51:32.471242 kubelet[3113]: E0130 13:51:32.471027 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mcmv9_calico-system(1654d24a-276c-4733-ab3c-b2a324f91922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mcmv9_calico-system(1654d24a-276c-4733-ab3c-b2a324f91922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:32.503944 kubelet[3113]: I0130 13:51:32.503882 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:51:32.504853 containerd[1673]: time="2025-01-30T13:51:32.504739362Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:51:32.505173 containerd[1673]: time="2025-01-30T13:51:32.505018269Z" level=info msg="Ensure that sandbox 4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b in task-service has been cleanup successfully" Jan 30 13:51:32.531279 containerd[1673]: time="2025-01-30T13:51:32.531222391Z" level=error msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" failed" error="failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.531548 kubelet[3113]: E0130 13:51:32.531483 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:51:32.531631 kubelet[3113]: E0130 13:51:32.531573 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b"} Jan 30 13:51:32.531677 kubelet[3113]: E0130 13:51:32.531662 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:32.531777 kubelet[3113]: E0130 13:51:32.531699 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:32.578981 containerd[1673]: time="2025-01-30T13:51:32.578923425Z" level=error msg="Failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.579356 containerd[1673]: time="2025-01-30T13:51:32.579317934Z" level=error msg="encountered an error cleaning up failed sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.579468 containerd[1673]: time="2025-01-30T13:51:32.579395836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8997,Uid:7c2a0430-bfea-48a8-b9a0-8ea183a3114a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.579778 kubelet[3113]: E0130 13:51:32.579723 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Jan 30 13:51:32.579985 kubelet[3113]: E0130 13:51:32.579801 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-z8997" Jan 30 13:51:32.579985 kubelet[3113]: E0130 13:51:32.579828 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-z8997" Jan 30 13:51:32.579985 kubelet[3113]: E0130 13:51:32.579895 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-z8997_kube-system(7c2a0430-bfea-48a8-b9a0-8ea183a3114a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-z8997_kube-system(7c2a0430-bfea-48a8-b9a0-8ea183a3114a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z8997" podUID="7c2a0430-bfea-48a8-b9a0-8ea183a3114a" Jan 30 13:51:32.626632 containerd[1673]: time="2025-01-30T13:51:32.626571356Z" level=error msg="Failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.626948 containerd[1673]: time="2025-01-30T13:51:32.626913665Z" level=error msg="encountered an error cleaning up failed sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.627057 containerd[1673]: time="2025-01-30T13:51:32.626985466Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-2m7hp,Uid:7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.627336 kubelet[3113]: E0130 13:51:32.627293 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.627448 kubelet[3113]: E0130 13:51:32.627363 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" Jan 30 13:51:32.627448 kubelet[3113]: E0130 13:51:32.627394 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" Jan 30 13:51:32.627605 kubelet[3113]: E0130 13:51:32.627476 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df8c6b8fc-2m7hp_calico-apiserver(7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df8c6b8fc-2m7hp_calico-apiserver(7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podUID="7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1" Jan 30 13:51:32.700631 containerd[1673]: time="2025-01-30T13:51:32.700573415Z" level=error msg="Failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.700979 containerd[1673]: time="2025-01-30T13:51:32.700945523Z" level=error msg="encountered an error cleaning up failed sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.701117 containerd[1673]: time="2025-01-30T13:51:32.701023625Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-vtx2w,Uid:d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.701351 kubelet[3113]: E0130 13:51:32.701312 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.701455 kubelet[3113]: E0130 13:51:32.701381 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" Jan 30 13:51:32.701455 kubelet[3113]: E0130 13:51:32.701409 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" Jan 30 13:51:32.701538 kubelet[3113]: E0130 13:51:32.701466 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5df8c6b8fc-vtx2w_calico-apiserver(d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5df8c6b8fc-vtx2w_calico-apiserver(d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podUID="d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7" Jan 30 13:51:32.847916 containerd[1673]: time="2025-01-30T13:51:32.847843213Z" level=error msg="Failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.848581 containerd[1673]: time="2025-01-30T13:51:32.848285024Z" level=error msg="encountered an error cleaning up failed sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.848581 containerd[1673]: time="2025-01-30T13:51:32.848367326Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4g4nh,Uid:f7fbcb71-4682-4a9b-9734-5a668b2754b3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.848730 kubelet[3113]: E0130 13:51:32.848649 3113 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.848791 kubelet[3113]: E0130 13:51:32.848725 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4g4nh" Jan 30 13:51:32.848791 kubelet[3113]: E0130 13:51:32.848750 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-6f6b679f8f-4g4nh" Jan 30 13:51:32.848876 kubelet[3113]: E0130 13:51:32.848800 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-6f6b679f8f-4g4nh_kube-system(f7fbcb71-4682-4a9b-9734-5a668b2754b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-6f6b679f8f-4g4nh_kube-system(f7fbcb71-4682-4a9b-9734-5a668b2754b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4g4nh" podUID="f7fbcb71-4682-4a9b-9734-5a668b2754b3" Jan 30 13:51:32.849266 containerd[1673]: time="2025-01-30T13:51:32.849185545Z" level=error msg="Failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.849855 containerd[1673]: time="2025-01-30T13:51:32.849723158Z" level=error msg="encountered an error cleaning up failed sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.849855 containerd[1673]: time="2025-01-30T13:51:32.849800160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-659c567d5c-h72ss,Uid:6962ed73-d508-4340-81ff-7f3201a82a70,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 
13:51:32.850519 kubelet[3113]: E0130 13:51:32.850273 3113 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:32.850519 kubelet[3113]: E0130 13:51:32.850333 3113 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" Jan 30 13:51:32.850519 kubelet[3113]: E0130 13:51:32.850357 3113 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" Jan 30 13:51:32.850718 kubelet[3113]: E0130 13:51:32.850410 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-659c567d5c-h72ss_calico-system(6962ed73-d508-4340-81ff-7f3201a82a70)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-659c567d5c-h72ss_calico-system(6962ed73-d508-4340-81ff-7f3201a82a70)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:51:33.286197 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca-shm.mount: Deactivated successfully. Jan 30 13:51:33.286307 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6-shm.mount: Deactivated successfully. Jan 30 13:51:33.286383 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3-shm.mount: Deactivated successfully. Jan 30 13:51:33.286463 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b-shm.mount: Deactivated successfully. 
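Every failed RunPodSandbox/StopPodSandbox above trips over the same precondition: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes through a hostPath mount of /var/lib/calico once it is up. A minimal Go sketch of that precondition (illustrative only, not Calico's actual source; the path and error wording are taken from the log):

```go
// Sketch of the precondition behind the repeated CNI errors above: before
// any sandbox add/delete, the plugin needs the node name that calico/node
// writes once it is running with /var/lib/calico mounted from the host.
// Not Calico's real source; just the failure mode reduced to a few lines.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename"

func loadNodename() (string, error) {
	if _, err := os.Stat(nodenameFile); err != nil {
		// Same shape as the log: "stat /var/lib/calico/nodename: no such
		// file or directory: check that the calico/node container is
		// running and has mounted /var/lib/calico/"
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := loadNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("nodename present, CNI calls can proceed:", name)
}
```

Until calico-node stays up long enough to write this file, every CNI add and delete on the node fails identically, which is why the same error repeats for the calico-apiserver, coredns, and calico-kube-controllers pods.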
Jan 30 13:51:33.507785 kubelet[3113]: I0130 13:51:33.507436 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:51:33.509308 containerd[1673]: time="2025-01-30T13:51:33.509194824Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:51:33.509663 containerd[1673]: time="2025-01-30T13:51:33.509412030Z" level=info msg="Ensure that sandbox fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca in task-service has been cleanup successfully" Jan 30 13:51:33.511078 kubelet[3113]: I0130 13:51:33.510513 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:51:33.513666 containerd[1673]: time="2025-01-30T13:51:33.513545328Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:51:33.513779 containerd[1673]: time="2025-01-30T13:51:33.513750633Z" level=info msg="Ensure that sandbox da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3 in task-service has been cleanup successfully" Jan 30 13:51:33.515025 kubelet[3113]: I0130 13:51:33.514196 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:51:33.516483 containerd[1673]: time="2025-01-30T13:51:33.516453097Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:51:33.516708 containerd[1673]: time="2025-01-30T13:51:33.516684402Z" level=info msg="Ensure that sandbox 599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324 in task-service has been cleanup successfully" Jan 30 13:51:33.523614 kubelet[3113]: I0130 13:51:33.523395 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:51:33.524310 containerd[1673]: time="2025-01-30T13:51:33.524273583Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:51:33.525035 containerd[1673]: time="2025-01-30T13:51:33.524607891Z" level=info msg="Ensure that sandbox ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934 in task-service has been cleanup successfully" Jan 30 13:51:33.528790 kubelet[3113]: I0130 13:51:33.528732 3113 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:51:33.533370 containerd[1673]: time="2025-01-30T13:51:33.533340698Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:51:33.535025 containerd[1673]: time="2025-01-30T13:51:33.534580428Z" level=info msg="Ensure that sandbox d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6 in task-service has been cleanup successfully" Jan 30 13:51:33.634635 containerd[1673]: time="2025-01-30T13:51:33.634499701Z" level=error msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" failed" error="failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:33.635376 kubelet[3113]: E0130 13:51:33.635109 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:51:33.635376 kubelet[3113]: E0130 13:51:33.635181 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324"} Jan 30 13:51:33.635376 kubelet[3113]: E0130 13:51:33.635228 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:33.635376 kubelet[3113]: E0130 13:51:33.635270 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podUID="7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1" Jan 30 13:51:33.637837 containerd[1673]: time="2025-01-30T13:51:33.635283320Z" level=error msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" failed" error="failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:33.637904 kubelet[3113]: E0130 13:51:33.635530 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:51:33.637904 kubelet[3113]: E0130 13:51:33.635580 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6"} Jan 30 13:51:33.637904 kubelet[3113]: E0130 13:51:33.635621 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:33.637904 kubelet[3113]: E0130 13:51:33.635651 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podUID="d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7" Jan 30 13:51:33.639054 containerd[1673]: time="2025-01-30T13:51:33.639011108Z" level=error msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" failed" error="failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:33.639224 kubelet[3113]: E0130 13:51:33.639192 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:51:33.639361 kubelet[3113]: E0130 13:51:33.639338 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca"} Jan 30 13:51:33.639476 kubelet[3113]: E0130 13:51:33.639458 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:33.640101 kubelet[3113]: E0130 13:51:33.640050 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:51:33.645185 containerd[1673]: time="2025-01-30T13:51:33.645129154Z" level=error msg="StopPodSandbox for 
\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" failed" error="failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:33.645374 kubelet[3113]: E0130 13:51:33.645330 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:51:33.645487 kubelet[3113]: E0130 13:51:33.645469 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3"} Jan 30 13:51:33.645578 kubelet[3113]: E0130 13:51:33.645564 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:33.645728 kubelet[3113]: E0130 13:51:33.645705 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z8997" podUID="7c2a0430-bfea-48a8-b9a0-8ea183a3114a" Jan 30 13:51:33.647847 containerd[1673]: time="2025-01-30T13:51:33.647818218Z" level=error msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" failed" error="failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:33.648084 kubelet[3113]: E0130 13:51:33.648049 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:51:33.648167 kubelet[3113]: E0130 13:51:33.648097 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934"} Jan 30 13:51:33.648167 kubelet[3113]: E0130 13:51:33.648135 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:33.648258 kubelet[3113]: E0130 13:51:33.648163 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4g4nh" podUID="f7fbcb71-4682-4a9b-9734-5a668b2754b3" Jan 30 13:51:36.220701 kubelet[3113]: I0130 13:51:36.220315 3113 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:51:44.815872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3001229450.mount: Deactivated successfully. Jan 30 13:51:45.031411 containerd[1673]: time="2025-01-30T13:51:45.031333725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.078648 containerd[1673]: time="2025-01-30T13:51:45.078133457Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:51:45.123839 containerd[1673]: time="2025-01-30T13:51:45.123716259Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.171107 containerd[1673]: time="2025-01-30T13:51:45.170966301Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:51:45.172765 containerd[1673]: time="2025-01-30T13:51:45.172049828Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 13.66934587s" Jan 30 13:51:45.172765 containerd[1673]: time="2025-01-30T13:51:45.172104429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:51:45.182350 containerd[1673]: time="2025-01-30T13:51:45.182312576Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:51:45.377794 containerd[1673]: time="2025-01-30T13:51:45.377321991Z" 
level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:51:45.377794 containerd[1673]: time="2025-01-30T13:51:45.377764902Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:51:45.429037 containerd[1673]: time="2025-01-30T13:51:45.428536230Z" level=error msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" failed" error="failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:45.429257 kubelet[3113]: E0130 13:51:45.428865 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:51:45.429257 kubelet[3113]: E0130 13:51:45.428942 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6"} Jan 30 13:51:45.429257 kubelet[3113]: E0130 13:51:45.428988 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:45.430694 kubelet[3113]: E0130 13:51:45.430615 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podUID="d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7" Jan 30 13:51:45.435742 containerd[1673]: time="2025-01-30T13:51:45.435686702Z" level=error msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" failed" error="failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:45.436045 kubelet[3113]: E0130 13:51:45.435988 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:51:45.436125 kubelet[3113]: E0130 13:51:45.436058 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b"} Jan 30 13:51:45.436125 kubelet[3113]: E0130 13:51:45.436103 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:45.436252 kubelet[3113]: E0130 13:51:45.436134 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:51:45.631081 containerd[1673]: time="2025-01-30T13:51:45.630557214Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\"" Jan 30 13:51:45.633036 containerd[1673]: time="2025-01-30T13:51:45.631752443Z" level=info msg="StartContainer for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\"" Jan 30 13:51:45.666178 systemd[1]: Started cri-containerd-5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626.scope - libcontainer container 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626. Jan 30 13:51:45.697350 containerd[1673]: time="2025-01-30T13:51:45.697298328Z" level=info msg="StartContainer for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" returns successfully" Jan 30 13:51:45.905282 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:51:45.905418 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 30 13:51:45.929423 systemd[1]: cri-containerd-5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626.scope: Deactivated successfully. Jan 30 13:51:45.954378 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626-rootfs.mount: Deactivated successfully.
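calico-node starts at 13:51:45.697 and its scope is deactivated at 13:51:45.929, i.e. the process exits almost immediately. The readiness command the kubelet execs into the container, per the cmd=[...] fields in the ExecSync records further down, is /bin/calico-node -bird-ready -felix-ready. A hedged sketch of replaying that exec by hand through crictl (assumes crictl is installed and pointed at the node's containerd socket; the container ID is the one from the log). Once the task has exited, it fails with the same "task ... not found" seen below:

```go
// Sketch: replay the kubelet's readiness exec against the calico-node
// container by hand via crictl. With the task already exited, there is no
// running process to exec into, so this fails much like the ExecSync
// errors recorded below. The readiness command and container ID are
// copied from the log; everything else is an assumption for illustration.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const containerID = "5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626"
	cmd := exec.Command("crictl", "exec", containerID,
		"/bin/calico-node", "-bird-ready", "-felix-ready")
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "readiness exec failed:", err)
		os.Exit(1)
	}
	fmt.Println("calico-node reports bird and felix ready")
}
```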
Jan 30 13:51:46.376940 containerd[1673]: time="2025-01-30T13:51:46.376730857Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:51:46.377529 containerd[1673]: time="2025-01-30T13:51:46.376882661Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:51:46.579031 kubelet[3113]: I0130 13:51:46.577659 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-xfthv" podStartSLOduration=2.77938104 podStartE2EDuration="32.577636415s" podCreationTimestamp="2025-01-30 13:51:14 +0000 UTC" firstStartedPulling="2025-01-30 13:51:15.374980981 +0000 UTC m=+12.113917852" lastFinishedPulling="2025-01-30 13:51:45.173236456 +0000 UTC m=+41.912173227" observedRunningTime="2025-01-30 13:51:46.576635791 +0000 UTC m=+43.315572662" watchObservedRunningTime="2025-01-30 13:51:46.577636415 +0000 UTC m=+43.316573286" Jan 30 13:51:46.976239 containerd[1673]: time="2025-01-30T13:51:46.975896245Z" level=error msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" failed" error="failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:46.976427 kubelet[3113]: E0130 13:51:46.976150 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:51:46.976427 kubelet[3113]: E0130 13:51:46.976211 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324"} Jan 30 13:51:46.976427 kubelet[3113]: E0130 13:51:46.976250 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:46.976427 kubelet[3113]: E0130 13:51:46.976287 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podUID="7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1" Jan 30 13:51:46.976712 kubelet[3113]: E0130 13:51:46.976604 3113 log.go:32] "StopPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:51:46.976712 kubelet[3113]: E0130 13:51:46.976643 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca"} Jan 30 13:51:46.976712 kubelet[3113]: E0130 13:51:46.976676 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:46.976712 kubelet[3113]: E0130 13:51:46.976702 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:51:46.976929 containerd[1673]: time="2025-01-30T13:51:46.976451058Z" level=error msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" failed" error="failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:48.080149 containerd[1673]: time="2025-01-30T13:51:47.377292751Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:51:48.084790 containerd[1673]: time="2025-01-30T13:51:48.084723057Z" level=error msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" failed" error="failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:48.085062 kubelet[3113]: E0130 13:51:48.085014 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 
30 13:51:48.085520 kubelet[3113]: E0130 13:51:48.085085 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3"} Jan 30 13:51:48.085520 kubelet[3113]: E0130 13:51:48.085131 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:48.085520 kubelet[3113]: E0130 13:51:48.085165 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z8997" podUID="7c2a0430-bfea-48a8-b9a0-8ea183a3114a" Jan 30 13:51:48.376172 containerd[1673]: time="2025-01-30T13:51:48.375640591Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:51:48.402751 containerd[1673]: time="2025-01-30T13:51:48.402696845Z" level=error msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" failed" error="failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:48.402955 kubelet[3113]: E0130 13:51:48.402912 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:51:48.403071 kubelet[3113]: E0130 13:51:48.402969 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934"} Jan 30 13:51:48.403071 kubelet[3113]: E0130 13:51:48.403026 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:48.403071 kubelet[3113]: E0130 13:51:48.403060 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4g4nh" podUID="f7fbcb71-4682-4a9b-9734-5a668b2754b3" Jan 30 13:51:48.563218 containerd[1673]: time="2025-01-30T13:51:48.563159225Z" level=error msg="get state for 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" error="context deadline exceeded: unknown" Jan 30 13:51:48.563838 containerd[1673]: time="2025-01-30T13:51:48.563401431Z" level=warning msg="unknown status" status=0 Jan 30 13:51:51.672303 containerd[1673]: time="2025-01-30T13:51:51.672205191Z" level=error msg="ttrpc: received message on inactive stream" stream=27 Jan 30 13:51:51.673897 containerd[1673]: time="2025-01-30T13:51:51.673377720Z" level=info msg="shim disconnected" id=5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 namespace=k8s.io Jan 30 13:51:51.673897 containerd[1673]: time="2025-01-30T13:51:51.673422821Z" level=warning msg="cleaning up after shim disconnected" id=5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 namespace=k8s.io Jan 30 13:51:51.673897 containerd[1673]: time="2025-01-30T13:51:51.673439621Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:51.674273 containerd[1673]: time="2025-01-30T13:51:51.674211939Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="failed to exec in container: failed to create exec \"051ef9693429bd84c9d25c0d2d0f64ca8169eea7fe1a59f632fc766fe962aa42\": cannot exec in a deleted state: unknown" Jan 30 13:51:51.676071 kubelet[3113]: E0130 13:51:51.674527 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = failed to exec in container: failed to create exec \"051ef9693429bd84c9d25c0d2d0f64ca8169eea7fe1a59f632fc766fe962aa42\": cannot exec in a deleted state: unknown" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.677266 containerd[1673]: time="2025-01-30T13:51:51.677210311Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.677568 kubelet[3113]: E0130 13:51:51.677510 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.679317 containerd[1673]: time="2025-01-30T13:51:51.679267860Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.679450 kubelet[3113]: E0130 13:51:51.679409 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.683170 containerd[1673]: time="2025-01-30T13:51:51.683089452Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.683322 kubelet[3113]: E0130 13:51:51.683285 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.684818 containerd[1673]: time="2025-01-30T13:51:51.684700090Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.685806 kubelet[3113]: E0130 13:51:51.685769 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.687160 containerd[1673]: time="2025-01-30T13:51:51.687125048Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.687887 kubelet[3113]: E0130 13:51:51.687396 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.689235 containerd[1673]: time="2025-01-30T13:51:51.688853089Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.689319 kubelet[3113]: E0130 13:51:51.689073 3113 log.go:32] 
"ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.692041 containerd[1673]: time="2025-01-30T13:51:51.691970164Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.692517 kubelet[3113]: E0130 13:51:51.692263 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:51.694668 containerd[1673]: time="2025-01-30T13:51:51.694613527Z" level=error msg="ExecSync for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" Jan 30 13:51:51.694845 kubelet[3113]: E0130 13:51:51.694807 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626 not found: not found" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:52.577627 kubelet[3113]: I0130 13:51:52.577423 3113 scope.go:117] "RemoveContainer" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" Jan 30 13:51:52.582186 containerd[1673]: time="2025-01-30T13:51:52.581636028Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for container &ContainerMetadata{Name:calico-node,Attempt:1,}" Jan 30 13:51:52.884832 containerd[1673]: time="2025-01-30T13:51:52.884757274Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for &ContainerMetadata{Name:calico-node,Attempt:1,} returns container id \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\"" Jan 30 13:51:52.886488 containerd[1673]: time="2025-01-30T13:51:52.886153607Z" level=info msg="StartContainer for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\"" Jan 30 13:51:52.932201 systemd[1]: Started cri-containerd-9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5.scope - libcontainer container 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5. 
Jan 30 13:51:52.968476 containerd[1673]: time="2025-01-30T13:51:52.968415573Z" level=info msg="StartContainer for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\" returns successfully" Jan 30 13:51:53.027742 systemd[1]: cri-containerd-9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5.scope: Deactivated successfully. Jan 30 13:51:53.050421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5-rootfs.mount: Deactivated successfully. Jan 30 13:51:54.228463 containerd[1673]: time="2025-01-30T13:51:54.228366988Z" level=info msg="shim disconnected" id=9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 namespace=k8s.io Jan 30 13:51:54.228463 containerd[1673]: time="2025-01-30T13:51:54.228446590Z" level=warning msg="cleaning up after shim disconnected" id=9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 namespace=k8s.io Jan 30 13:51:54.228463 containerd[1673]: time="2025-01-30T13:51:54.228462590Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:51:54.229728 containerd[1673]: time="2025-01-30T13:51:54.229561517Z" level=error msg="ExecSync for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"31cc25d3f9d0b66b63bef97308719f5536b16a15570e34a8c2e2aa6a87b558ac\": task 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" Jan 30 13:51:54.230144 kubelet[3113]: E0130 13:51:54.230086 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to create exec \"31cc25d3f9d0b66b63bef97308719f5536b16a15570e34a8c2e2aa6a87b558ac\": task 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:54.232481 containerd[1673]: time="2025-01-30T13:51:54.232131878Z" level=error msg="ExecSync for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" Jan 30 13:51:54.232579 kubelet[3113]: E0130 13:51:54.232298 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:54.234146 containerd[1673]: time="2025-01-30T13:51:54.234111025Z" level=error msg="ExecSync for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\" failed" error="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" Jan 30 13:51:54.235474 kubelet[3113]: E0130 13:51:54.234370 3113 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = failed to exec in container: failed to load task: no running task found: task 
9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5 not found: not found" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" cmd=["/bin/calico-node","-bird-ready","-felix-ready"] Jan 30 13:51:54.589661 kubelet[3113]: I0130 13:51:54.589529 3113 scope.go:117] "RemoveContainer" containerID="5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626" Jan 30 13:51:54.589971 kubelet[3113]: I0130 13:51:54.589942 3113 scope.go:117] "RemoveContainer" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" Jan 30 13:51:54.590201 kubelet[3113]: E0130 13:51:54.590172 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-xfthv_calico-system(13aefa74-e574-433c-97ec-9a7237917ee6)\"" pod="calico-system/calico-node-xfthv" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" Jan 30 13:51:54.592256 containerd[1673]: time="2025-01-30T13:51:54.592059281Z" level=info msg="RemoveContainer for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\"" Jan 30 13:51:54.600749 containerd[1673]: time="2025-01-30T13:51:54.600600185Z" level=info msg="RemoveContainer for \"5a55b4b67fc64b45ee342e4ec1880220651ca72597f02f7fb2b988f6da1fd626\" returns successfully" Jan 30 13:51:55.595399 kubelet[3113]: I0130 13:51:55.594673 3113 scope.go:117] "RemoveContainer" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" Jan 30 13:51:55.595399 kubelet[3113]: E0130 13:51:55.594901 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 10s restarting failed container=calico-node pod=calico-node-xfthv_calico-system(13aefa74-e574-433c-97ec-9a7237917ee6)\"" pod="calico-system/calico-node-xfthv" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" Jan 30 13:51:56.377724 containerd[1673]: time="2025-01-30T13:51:56.376332828Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:51:56.403962 containerd[1673]: time="2025-01-30T13:51:56.403898887Z" level=error msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" failed" error="failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:56.404270 kubelet[3113]: E0130 13:51:56.404205 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:51:56.404376 kubelet[3113]: E0130 13:51:56.404288 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6"} Jan 30 13:51:56.404376 kubelet[3113]: E0130 13:51:56.404337 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:56.404494 kubelet[3113]: E0130 13:51:56.404369 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podUID="d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7" Jan 30 13:51:58.376079 containerd[1673]: time="2025-01-30T13:51:58.375702594Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:51:58.402643 containerd[1673]: time="2025-01-30T13:51:58.402586300Z" level=error msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" failed" error="failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:51:58.402919 kubelet[3113]: E0130 13:51:58.402869 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:51:58.403310 kubelet[3113]: E0130 13:51:58.402937 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b"} Jan 30 13:51:58.403310 kubelet[3113]: E0130 13:51:58.402988 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:51:58.403310 kubelet[3113]: E0130 13:51:58.403038 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:52:00.377508 containerd[1673]: time="2025-01-30T13:52:00.376165075Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:52:00.402933 containerd[1673]: time="2025-01-30T13:52:00.402879377Z" level=error msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" failed" error="failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:00.403238 kubelet[3113]: E0130 13:52:00.403179 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:00.403623 kubelet[3113]: E0130 13:52:00.403246 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3"} Jan 30 13:52:00.403623 kubelet[3113]: E0130 13:52:00.403296 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:00.403623 kubelet[3113]: E0130 13:52:00.403327 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z8997" podUID="7c2a0430-bfea-48a8-b9a0-8ea183a3114a" Jan 30 13:52:01.378420 containerd[1673]: time="2025-01-30T13:52:01.377641544Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:52:01.405150 containerd[1673]: time="2025-01-30T13:52:01.405091763Z" level=error msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" failed" error="failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:01.405681 kubelet[3113]: E0130 13:52:01.405324 3113 log.go:32] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:01.405681 kubelet[3113]: E0130 13:52:01.405381 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324"} Jan 30 13:52:01.405681 kubelet[3113]: E0130 13:52:01.405430 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:01.405681 kubelet[3113]: E0130 13:52:01.405462 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podUID="7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1" Jan 30 13:52:02.376387 containerd[1673]: time="2025-01-30T13:52:02.376252348Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:52:02.401189 containerd[1673]: time="2025-01-30T13:52:02.401131109Z" level=error msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" failed" error="failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:02.401662 kubelet[3113]: E0130 13:52:02.401373 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:02.401662 kubelet[3113]: E0130 13:52:02.401435 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca"} Jan 30 13:52:02.401662 kubelet[3113]: E0130 13:52:02.401481 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to 
destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:02.401662 kubelet[3113]: E0130 13:52:02.401520 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:52:03.377558 containerd[1673]: time="2025-01-30T13:52:03.376793896Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:52:03.404964 containerd[1673]: time="2025-01-30T13:52:03.404903430Z" level=error msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" failed" error="failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:03.405558 kubelet[3113]: E0130 13:52:03.405125 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:03.405558 kubelet[3113]: E0130 13:52:03.405186 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934"} Jan 30 13:52:03.405558 kubelet[3113]: E0130 13:52:03.405236 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:03.405558 kubelet[3113]: E0130 13:52:03.405263 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4g4nh" 
podUID="f7fbcb71-4682-4a9b-9734-5a668b2754b3" Jan 30 13:52:08.376285 kubelet[3113]: I0130 13:52:08.376236 3113 scope.go:117] "RemoveContainer" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" Jan 30 13:52:08.379381 containerd[1673]: time="2025-01-30T13:52:08.379099734Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for container &ContainerMetadata{Name:calico-node,Attempt:2,}" Jan 30 13:52:08.575330 containerd[1673]: time="2025-01-30T13:52:08.575202279Z" level=info msg="CreateContainer within sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" for &ContainerMetadata{Name:calico-node,Attempt:2,} returns container id \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\"" Jan 30 13:52:08.577699 containerd[1673]: time="2025-01-30T13:52:08.576071900Z" level=info msg="StartContainer for \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\"" Jan 30 13:52:08.616203 systemd[1]: Started cri-containerd-070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66.scope - libcontainer container 070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66. Jan 30 13:52:08.654777 containerd[1673]: time="2025-01-30T13:52:08.654657161Z" level=info msg="StartContainer for \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\" returns successfully" Jan 30 13:52:08.708379 systemd[1]: cri-containerd-070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66.scope: Deactivated successfully. Jan 30 13:52:08.732223 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66-rootfs.mount: Deactivated successfully. Jan 30 13:52:09.423876 containerd[1673]: time="2025-01-30T13:52:09.377730089Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:52:09.425978 containerd[1673]: time="2025-01-30T13:52:09.425927031Z" level=error msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" failed" error="failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:09.426319 kubelet[3113]: E0130 13:52:09.426258 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:52:09.426710 kubelet[3113]: E0130 13:52:09.426345 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6"} Jan 30 13:52:09.426710 kubelet[3113]: E0130 13:52:09.426393 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox 
\\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:09.426710 kubelet[3113]: E0130 13:52:09.426440 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podUID="d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7" Jan 30 13:52:09.433022 containerd[1673]: time="2025-01-30T13:52:09.432954697Z" level=info msg="shim disconnected" id=070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66 namespace=k8s.io Jan 30 13:52:09.433105 containerd[1673]: time="2025-01-30T13:52:09.433037399Z" level=warning msg="cleaning up after shim disconnected" id=070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66 namespace=k8s.io Jan 30 13:52:09.433105 containerd[1673]: time="2025-01-30T13:52:09.433054499Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:09.631813 kubelet[3113]: I0130 13:52:09.631774 3113 scope.go:117] "RemoveContainer" containerID="9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5" Jan 30 13:52:09.632964 kubelet[3113]: I0130 13:52:09.632742 3113 scope.go:117] "RemoveContainer" containerID="070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66" Jan 30 13:52:09.633472 kubelet[3113]: E0130 13:52:09.633239 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-xfthv_calico-system(13aefa74-e574-433c-97ec-9a7237917ee6)\"" pod="calico-system/calico-node-xfthv" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" Jan 30 13:52:09.634835 containerd[1673]: time="2025-01-30T13:52:09.634417669Z" level=info msg="RemoveContainer for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\"" Jan 30 13:52:09.644232 containerd[1673]: time="2025-01-30T13:52:09.644186400Z" level=info msg="RemoveContainer for \"9233b5bbc2393dd4b37aed2822b7ee8fb20b87c4d1ff7fc3031f894a3b0cd8b5\" returns successfully" Jan 30 13:52:10.637202 kubelet[3113]: I0130 13:52:10.636381 3113 scope.go:117] "RemoveContainer" containerID="070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66" Jan 30 13:52:10.637202 kubelet[3113]: E0130 13:52:10.636590 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with CrashLoopBackOff: \"back-off 20s restarting failed container=calico-node pod=calico-node-xfthv_calico-system(13aefa74-e574-433c-97ec-9a7237917ee6)\"" pod="calico-system/calico-node-xfthv" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" Jan 30 13:52:12.376699 containerd[1673]: time="2025-01-30T13:52:12.376523022Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:52:12.378545 containerd[1673]: time="2025-01-30T13:52:12.377106735Z" level=info msg="StopPodSandbox for 
\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:52:12.381350 containerd[1673]: time="2025-01-30T13:52:12.380934626Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:52:12.446279 containerd[1673]: time="2025-01-30T13:52:12.446117070Z" level=error msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" failed" error="failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:12.447201 kubelet[3113]: E0130 13:52:12.446616 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:12.447201 kubelet[3113]: E0130 13:52:12.446684 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3"} Jan 30 13:52:12.447201 kubelet[3113]: E0130 13:52:12.446737 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:12.447201 kubelet[3113]: E0130 13:52:12.446769 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7c2a0430-bfea-48a8-b9a0-8ea183a3114a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-z8997" podUID="7c2a0430-bfea-48a8-b9a0-8ea183a3114a" Jan 30 13:52:12.447834 containerd[1673]: time="2025-01-30T13:52:12.447802610Z" level=error msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" failed" error="failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:12.448048 kubelet[3113]: E0130 13:52:12.447989 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:52:12.448190 kubelet[3113]: E0130 13:52:12.448063 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b"} Jan 30 13:52:12.448190 kubelet[3113]: E0130 13:52:12.448101 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:12.448190 kubelet[3113]: E0130 13:52:12.448128 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1654d24a-276c-4733-ab3c-b2a324f91922\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mcmv9" podUID="1654d24a-276c-4733-ab3c-b2a324f91922" Jan 30 13:52:12.448903 containerd[1673]: time="2025-01-30T13:52:12.448866235Z" level=error msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" failed" error="failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:12.449141 kubelet[3113]: E0130 13:52:12.449104 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:12.449235 kubelet[3113]: E0130 13:52:12.449147 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324"} Jan 30 13:52:12.449235 kubelet[3113]: E0130 13:52:12.449179 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:12.449235 kubelet[3113]: E0130 13:52:12.449209 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podUID="7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1" Jan 30 13:52:15.379032 containerd[1673]: time="2025-01-30T13:52:15.376571848Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:52:15.417039 containerd[1673]: time="2025-01-30T13:52:15.416958741Z" level=error msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" failed" error="failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:15.417904 kubelet[3113]: E0130 13:52:15.417535 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:15.417904 kubelet[3113]: E0130 13:52:15.417642 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca"} Jan 30 13:52:15.417904 kubelet[3113]: E0130 13:52:15.417729 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:15.417904 kubelet[3113]: E0130 13:52:15.417789 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:52:15.510964 containerd[1673]: time="2025-01-30T13:52:15.510767454Z" level=info msg="StopContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" with timeout 300 (s)" Jan 30 13:52:15.513264 containerd[1673]: time="2025-01-30T13:52:15.513207378Z" level=info msg="Stop container \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" with signal 
terminated" Jan 30 13:52:15.554168 systemd[1]: cri-containerd-064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331.scope: Deactivated successfully. Jan 30 13:52:15.577959 containerd[1673]: time="2025-01-30T13:52:15.577913408Z" level=info msg="StopPodSandbox for \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\"" Jan 30 13:52:15.578435 containerd[1673]: time="2025-01-30T13:52:15.577978408Z" level=info msg="Container to stop \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:15.578435 containerd[1673]: time="2025-01-30T13:52:15.577999009Z" level=info msg="Container to stop \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:15.578435 containerd[1673]: time="2025-01-30T13:52:15.578031909Z" level=info msg="Container to stop \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:15.584071 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e-shm.mount: Deactivated successfully. Jan 30 13:52:15.611238 systemd[1]: cri-containerd-4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e.scope: Deactivated successfully. Jan 30 13:52:15.623269 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331-rootfs.mount: Deactivated successfully. Jan 30 13:52:15.656799 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e-rootfs.mount: Deactivated successfully. 
Jan 30 13:52:15.657643 containerd[1673]: time="2025-01-30T13:52:15.657573183Z" level=info msg="shim disconnected" id=4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e namespace=k8s.io Jan 30 13:52:15.657783 containerd[1673]: time="2025-01-30T13:52:15.657653884Z" level=warning msg="cleaning up after shim disconnected" id=4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e namespace=k8s.io Jan 30 13:52:15.657783 containerd[1673]: time="2025-01-30T13:52:15.657665184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:15.660384 containerd[1673]: time="2025-01-30T13:52:15.660218809Z" level=info msg="shim disconnected" id=064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331 namespace=k8s.io Jan 30 13:52:15.660384 containerd[1673]: time="2025-01-30T13:52:15.660277309Z" level=warning msg="cleaning up after shim disconnected" id=064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331 namespace=k8s.io Jan 30 13:52:15.660384 containerd[1673]: time="2025-01-30T13:52:15.660287109Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:15.688827 containerd[1673]: time="2025-01-30T13:52:15.688471884Z" level=info msg="TearDown network for sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" successfully" Jan 30 13:52:15.688827 containerd[1673]: time="2025-01-30T13:52:15.688515084Z" level=info msg="StopPodSandbox for \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" returns successfully" Jan 30 13:52:15.690167 containerd[1673]: time="2025-01-30T13:52:15.690114400Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:52:15Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:52:15.696323 containerd[1673]: time="2025-01-30T13:52:15.696290660Z" level=info msg="StopContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" returns successfully" Jan 30 13:52:15.697055 containerd[1673]: time="2025-01-30T13:52:15.697030167Z" level=info msg="StopPodSandbox for \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\"" Jan 30 13:52:15.697888 containerd[1673]: time="2025-01-30T13:52:15.697218369Z" level=info msg="Container to stop \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:52:15.702626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b-shm.mount: Deactivated successfully. Jan 30 13:52:15.713028 systemd[1]: cri-containerd-d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b.scope: Deactivated successfully. Jan 30 13:52:15.740270 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b-rootfs.mount: Deactivated successfully. 
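Every failed StopPodSandbox in this log spells out the same root cause in its error string: sandbox teardown runs a CNI DEL, Calico's plugin resolves the node name from /var/lib/calico/nodename, and that file only exists while calico-node is running and has mounted /var/lib/calico. (The teardown of 4874066... succeeds above because calico-node itself is host-networked, so no CNI DEL is involved.) A sketch of the delete path via the CNI library; the conflist filename and directories follow a stock Calico install and are assumptions, not values read from this host:

    // cni_del.go - sketch of the CNI DEL containerd issues during
    // StopPodSandbox. Paths assume a stock Calico install
    // (/etc/cni/net.d, /opt/cni/bin); IDs and netns are placeholders.
    package main

    import (
    	"context"
    	"fmt"
    	"os"

    	"github.com/containernetworking/cni/libcni"
    )

    func main() {
    	// calico-node writes this file at startup; while it is crash-looping
    	// the file is absent and DEL fails exactly as logged:
    	// "stat /var/lib/calico/nodename: no such file or directory".
    	if _, err := os.Stat("/var/lib/calico/nodename"); err != nil {
    		fmt.Println("calico-node has not populated /var/lib/calico:", err)
    	}

    	conf, err := libcni.ConfListFromFile("/etc/cni/net.d/10-calico.conflist")
    	if err != nil {
    		panic(err)
    	}
    	cni := libcni.NewCNIConfig([]string{"/opt/cni/bin"}, nil)
    	err = cni.DelNetworkList(context.Background(), conf, &libcni.RuntimeConf{
    		ContainerID: "d890a474e960a3...",        // placeholder sandbox ID
    		NetNS:       "/var/run/netns/cni-xxxx", // placeholder netns path
    		IfName:      "eth0",
    	})
    	fmt.Println("CNI DEL:", err)
    }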
Jan 30 13:52:15.752681 containerd[1673]: time="2025-01-30T13:52:15.752395406Z" level=info msg="shim disconnected" id=d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b namespace=k8s.io Jan 30 13:52:15.753238 containerd[1673]: time="2025-01-30T13:52:15.752655508Z" level=warning msg="cleaning up after shim disconnected" id=d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b namespace=k8s.io Jan 30 13:52:15.753238 containerd[1673]: time="2025-01-30T13:52:15.753067412Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:15.773768 kubelet[3113]: I0130 13:52:15.773723 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-net-dir\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.773768 kubelet[3113]: I0130 13:52:15.773774 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-log-dir\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773794 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-lib-modules\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773815 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-bin-dir\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773849 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13aefa74-e574-433c-97ec-9a7237917ee6-node-certs\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773873 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-flexvol-driver-host\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773891 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-policysync\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775209 kubelet[3113]: I0130 13:52:15.773921 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sx57z\" (UniqueName: \"kubernetes.io/projected/13aefa74-e574-433c-97ec-9a7237917ee6-kube-api-access-sx57z\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775466 kubelet[3113]: I0130 13:52:15.773942 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: 
\"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-run-calico\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775466 kubelet[3113]: I0130 13:52:15.773964 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-xtables-lock\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775466 kubelet[3113]: I0130 13:52:15.773989 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-lib-calico\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.775466 kubelet[3113]: I0130 13:52:15.774035 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13aefa74-e574-433c-97ec-9a7237917ee6-tigera-ca-bundle\") pod \"13aefa74-e574-433c-97ec-9a7237917ee6\" (UID: \"13aefa74-e574-433c-97ec-9a7237917ee6\") " Jan 30 13:52:15.779366 kubelet[3113]: I0130 13:52:15.777208 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.779366 kubelet[3113]: I0130 13:52:15.777278 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.779366 kubelet[3113]: I0130 13:52:15.777301 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.779366 kubelet[3113]: I0130 13:52:15.777318 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.779366 kubelet[3113]: I0130 13:52:15.777337 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "cni-bin-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.783040 containerd[1673]: time="2025-01-30T13:52:15.782346997Z" level=info msg="TearDown network for sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" successfully" Jan 30 13:52:15.783040 containerd[1673]: time="2025-01-30T13:52:15.782391798Z" level=info msg="StopPodSandbox for \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" returns successfully" Jan 30 13:52:15.783500 kubelet[3113]: I0130 13:52:15.783469 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-policysync" (OuterVolumeSpecName: "policysync") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.786020 kubelet[3113]: E0130 13:52:15.785972 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="install-cni" Jan 30 13:52:15.786107 kubelet[3113]: E0130 13:52:15.786053 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.786107 kubelet[3113]: E0130 13:52:15.786067 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.786107 kubelet[3113]: E0130 13:52:15.786079 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="flexvol-driver" Jan 30 13:52:15.786276 kubelet[3113]: I0130 13:52:15.786127 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.786276 kubelet[3113]: I0130 13:52:15.786138 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.786276 kubelet[3113]: I0130 13:52:15.786146 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.786276 kubelet[3113]: E0130 13:52:15.786180 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" containerName="calico-node" Jan 30 13:52:15.790212 kubelet[3113]: I0130 13:52:15.790180 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/13aefa74-e574-433c-97ec-9a7237917ee6-node-certs" (OuterVolumeSpecName: "node-certs") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:52:15.792161 kubelet[3113]: I0130 13:52:15.792128 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13aefa74-e574-433c-97ec-9a7237917ee6-kube-api-access-sx57z" (OuterVolumeSpecName: "kube-api-access-sx57z") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "kube-api-access-sx57z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:15.792289 kubelet[3113]: I0130 13:52:15.792198 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.792289 kubelet[3113]: I0130 13:52:15.792229 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.792289 kubelet[3113]: I0130 13:52:15.792251 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 30 13:52:15.792799 kubelet[3113]: I0130 13:52:15.792728 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/13aefa74-e574-433c-97ec-9a7237917ee6-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "13aefa74-e574-433c-97ec-9a7237917ee6" (UID: "13aefa74-e574-433c-97ec-9a7237917ee6"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:52:15.803729 systemd[1]: Created slice kubepods-besteffort-podf32e475a_d3b3_478a_87c3_7b556b937295.slice - libcontainer container kubepods-besteffort-podf32e475a_d3b3_478a_87c3_7b556b937295.slice. 
Jan 30 13:52:15.828739 containerd[1673]: time="2025-01-30T13:52:15.827871040Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875027 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15229b91-3171-4a6d-bc5e-9d7f16114666-tigera-ca-bundle\") pod \"15229b91-3171-4a6d-bc5e-9d7f16114666\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875086 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-svmg8\" (UniqueName: \"kubernetes.io/projected/15229b91-3171-4a6d-bc5e-9d7f16114666-kube-api-access-svmg8\") pod \"15229b91-3171-4a6d-bc5e-9d7f16114666\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875112 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/15229b91-3171-4a6d-bc5e-9d7f16114666-typha-certs\") pod \"15229b91-3171-4a6d-bc5e-9d7f16114666\" (UID: \"15229b91-3171-4a6d-bc5e-9d7f16114666\") " Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875168 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f32e475a-d3b3-478a-87c3-7b556b937295-tigera-ca-bundle\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875200 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt76z\" (UniqueName: \"kubernetes.io/projected/f32e475a-d3b3-478a-87c3-7b556b937295-kube-api-access-qt76z\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878034 kubelet[3113]: I0130 13:52:15.875221 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-policysync\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878456 kubelet[3113]: I0130 13:52:15.875245 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-var-run-calico\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878456 kubelet[3113]: I0130 13:52:15.875269 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-cni-bin-dir\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878456 kubelet[3113]: I0130 13:52:15.875295 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-xtables-lock\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " 
pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878456 kubelet[3113]: I0130 13:52:15.875316 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-var-lib-calico\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878456 kubelet[3113]: I0130 13:52:15.875341 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-cni-net-dir\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875370 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f32e475a-d3b3-478a-87c3-7b556b937295-node-certs\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875399 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-cni-log-dir\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875423 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-flexvol-driver-host\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875445 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f32e475a-d3b3-478a-87c3-7b556b937295-lib-modules\") pod \"calico-node-r9696\" (UID: \"f32e475a-d3b3-478a-87c3-7b556b937295\") " pod="calico-system/calico-node-r9696" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875475 3113 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-lib-calico\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878666 kubelet[3113]: I0130 13:52:15.875490 3113 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/13aefa74-e574-433c-97ec-9a7237917ee6-tigera-ca-bundle\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875503 3113 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-bin-dir\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875516 3113 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-net-dir\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875529 3113 
reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-cni-log-dir\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875542 3113 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-lib-modules\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875554 3113 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/13aefa74-e574-433c-97ec-9a7237917ee6-node-certs\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875567 3113 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-flexvol-driver-host\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875579 3113 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-policysync\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.878909 kubelet[3113]: I0130 13:52:15.875591 3113 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-xtables-lock\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.880285 kubelet[3113]: I0130 13:52:15.875604 3113 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-sx57z\" (UniqueName: \"kubernetes.io/projected/13aefa74-e574-433c-97ec-9a7237917ee6-kube-api-access-sx57z\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.880285 kubelet[3113]: I0130 13:52:15.875616 3113 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/13aefa74-e574-433c-97ec-9a7237917ee6-var-run-calico\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.880371 containerd[1673]: time="2025-01-30T13:52:15.880185249Z" level=error msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" failed" error="failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:15.881798 kubelet[3113]: I0130 13:52:15.880769 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/15229b91-3171-4a6d-bc5e-9d7f16114666-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "15229b91-3171-4a6d-bc5e-9d7f16114666" (UID: "15229b91-3171-4a6d-bc5e-9d7f16114666"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:52:15.885121 kubelet[3113]: I0130 13:52:15.883964 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15229b91-3171-4a6d-bc5e-9d7f16114666-kube-api-access-svmg8" (OuterVolumeSpecName: "kube-api-access-svmg8") pod "15229b91-3171-4a6d-bc5e-9d7f16114666" (UID: "15229b91-3171-4a6d-bc5e-9d7f16114666"). InnerVolumeSpecName "kube-api-access-svmg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:15.885312 kubelet[3113]: E0130 13:52:15.885246 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:15.886300 kubelet[3113]: E0130 13:52:15.886223 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca"} Jan 30 13:52:15.887205 kubelet[3113]: E0130 13:52:15.887174 3113 kubelet.go:2027] "Unhandled Error" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" logger="UnhandledError" Jan 30 13:52:15.887465 kubelet[3113]: I0130 13:52:15.887370 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/15229b91-3171-4a6d-bc5e-9d7f16114666-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "15229b91-3171-4a6d-bc5e-9d7f16114666" (UID: "15229b91-3171-4a6d-bc5e-9d7f16114666"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:52:15.889092 kubelet[3113]: E0130 13:52:15.889056 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"6962ed73-d508-4340-81ff-7f3201a82a70\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-659c567d5c-h72ss" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" Jan 30 13:52:15.979527 kubelet[3113]: I0130 13:52:15.976924 3113 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/15229b91-3171-4a6d-bc5e-9d7f16114666-tigera-ca-bundle\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.979527 kubelet[3113]: I0130 13:52:15.976974 3113 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-svmg8\" (UniqueName: \"kubernetes.io/projected/15229b91-3171-4a6d-bc5e-9d7f16114666-kube-api-access-svmg8\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:15.979527 kubelet[3113]: I0130 13:52:15.976991 3113 reconciler_common.go:288] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/15229b91-3171-4a6d-bc5e-9d7f16114666-typha-certs\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:16.113696 containerd[1673]: time="2025-01-30T13:52:16.113621521Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-node-r9696,Uid:f32e475a-d3b3-478a-87c3-7b556b937295,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:16.154119 containerd[1673]: time="2025-01-30T13:52:16.153927714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:16.154360 containerd[1673]: time="2025-01-30T13:52:16.154094515Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:16.154879 containerd[1673]: time="2025-01-30T13:52:16.154779222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:16.155078 containerd[1673]: time="2025-01-30T13:52:16.154889223Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:16.173208 systemd[1]: Started cri-containerd-c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb.scope - libcontainer container c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb. Jan 30 13:52:16.203112 containerd[1673]: time="2025-01-30T13:52:16.203058792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-r9696,Uid:f32e475a-d3b3-478a-87c3-7b556b937295,Namespace:calico-system,Attempt:0,} returns sandbox id \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\"" Jan 30 13:52:16.206372 containerd[1673]: time="2025-01-30T13:52:16.206158022Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:52:16.248164 containerd[1673]: time="2025-01-30T13:52:16.247547925Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983\"" Jan 30 13:52:16.250059 containerd[1673]: time="2025-01-30T13:52:16.248434233Z" level=info msg="StartContainer for \"b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983\"" Jan 30 13:52:16.276181 systemd[1]: Started cri-containerd-b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983.scope - libcontainer container b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983. Jan 30 13:52:16.313825 containerd[1673]: time="2025-01-30T13:52:16.313688968Z" level=info msg="StartContainer for \"b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983\" returns successfully" Jan 30 13:52:16.327343 systemd[1]: cri-containerd-b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983.scope: Deactivated successfully. 
Jan 30 13:52:16.376449 containerd[1673]: time="2025-01-30T13:52:16.376192277Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:52:16.408501 containerd[1673]: time="2025-01-30T13:52:16.408436390Z" level=error msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" failed" error="failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:52:16.409165 kubelet[3113]: E0130 13:52:16.408704 3113 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:16.409165 kubelet[3113]: E0130 13:52:16.408776 3113 kuberuntime_manager.go:1477] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934"} Jan 30 13:52:16.409165 kubelet[3113]: E0130 13:52:16.408829 3113 kuberuntime_manager.go:1077] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:52:16.409165 kubelet[3113]: E0130 13:52:16.408866 3113 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7fbcb71-4682-4a9b-9734-5a668b2754b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-6f6b679f8f-4g4nh" podUID="f7fbcb71-4682-4a9b-9734-5a668b2754b3" Jan 30 13:52:16.438713 containerd[1673]: time="2025-01-30T13:52:16.438630884Z" level=info msg="shim disconnected" id=b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983 namespace=k8s.io Jan 30 13:52:16.438713 containerd[1673]: time="2025-01-30T13:52:16.438709785Z" level=warning msg="cleaning up after shim disconnected" id=b8e5980f16f57641f9fba5ae8b3c45ef0e83192a2b3439721e47ea9394ddd983 namespace=k8s.io Jan 30 13:52:16.438713 containerd[1673]: time="2025-01-30T13:52:16.438721285Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:16.591246 systemd[1]: var-lib-kubelet-pods-13aefa74\x2de574\x2d433c\x2d97ec\x2d9a7237917ee6-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. 
Jan 30 13:52:16.591943 systemd[1]: var-lib-kubelet-pods-15229b91\x2d3171\x2d4a6d\x2dbc5e\x2d9d7f16114666-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 30 13:52:16.592267 systemd[1]: var-lib-kubelet-pods-13aefa74\x2de574\x2d433c\x2d97ec\x2d9a7237917ee6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsx57z.mount: Deactivated successfully. Jan 30 13:52:16.592455 systemd[1]: var-lib-kubelet-pods-13aefa74\x2de574\x2d433c\x2d97ec\x2d9a7237917ee6-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 30 13:52:16.592654 systemd[1]: var-lib-kubelet-pods-15229b91\x2d3171\x2d4a6d\x2dbc5e\x2d9d7f16114666-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsvmg8.mount: Deactivated successfully. Jan 30 13:52:16.592967 systemd[1]: var-lib-kubelet-pods-15229b91\x2d3171\x2d4a6d\x2dbc5e\x2d9d7f16114666-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 30 13:52:16.668502 containerd[1673]: time="2025-01-30T13:52:16.668333320Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:52:16.670338 kubelet[3113]: I0130 13:52:16.669944 3113 scope.go:117] "RemoveContainer" containerID="070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66" Jan 30 13:52:16.676748 containerd[1673]: time="2025-01-30T13:52:16.676223697Z" level=info msg="RemoveContainer for \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\"" Jan 30 13:52:16.686897 systemd[1]: Removed slice kubepods-besteffort-pod13aefa74_e574_433c_97ec_9a7237917ee6.slice - libcontainer container kubepods-besteffort-pod13aefa74_e574_433c_97ec_9a7237917ee6.slice. Jan 30 13:52:16.690574 containerd[1673]: time="2025-01-30T13:52:16.689478426Z" level=info msg="RemoveContainer for \"070a96c31c77dfe3d1312dfaa6fc42e4e506099d2482b8a03c7fca84044f0c66\" returns successfully" Jan 30 13:52:16.690698 kubelet[3113]: I0130 13:52:16.690486 3113 scope.go:117] "RemoveContainer" containerID="cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d" Jan 30 13:52:16.694765 systemd[1]: Removed slice kubepods-besteffort-pod15229b91_3171_4a6d_bc5e_9d7f16114666.slice - libcontainer container kubepods-besteffort-pod15229b91_3171_4a6d_bc5e_9d7f16114666.slice. Jan 30 13:52:16.700949 kubelet[3113]: E0130 13:52:16.700910 3113 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="15229b91-3171-4a6d-bc5e-9d7f16114666" containerName="calico-typha" Jan 30 13:52:16.701092 kubelet[3113]: I0130 13:52:16.700980 3113 memory_manager.go:354] "RemoveStaleState removing state" podUID="15229b91-3171-4a6d-bc5e-9d7f16114666" containerName="calico-typha" Jan 30 13:52:16.712101 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2728891158.mount: Deactivated successfully. 
Jan 30 13:52:16.716827 containerd[1673]: time="2025-01-30T13:52:16.716738391Z" level=info msg="RemoveContainer for \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\"" Jan 30 13:52:16.724377 containerd[1673]: time="2025-01-30T13:52:16.724073762Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d\"" Jan 30 13:52:16.724928 containerd[1673]: time="2025-01-30T13:52:16.724880970Z" level=info msg="StartContainer for \"5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d\"" Jan 30 13:52:16.727950 systemd[1]: Created slice kubepods-besteffort-poda543be2e_641d_42ff_9b77_b91f01d9bab1.slice - libcontainer container kubepods-besteffort-poda543be2e_641d_42ff_9b77_b91f01d9bab1.slice. Jan 30 13:52:16.741440 containerd[1673]: time="2025-01-30T13:52:16.741386931Z" level=info msg="RemoveContainer for \"cd01e6b23b1ba8c66b73c4814989fdc4ac10b58e75e79266af21c6ea1dca324d\" returns successfully" Jan 30 13:52:16.741889 kubelet[3113]: I0130 13:52:16.741762 3113 scope.go:117] "RemoveContainer" containerID="584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614" Jan 30 13:52:16.745497 containerd[1673]: time="2025-01-30T13:52:16.745140867Z" level=info msg="RemoveContainer for \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\"" Jan 30 13:52:16.753027 containerd[1673]: time="2025-01-30T13:52:16.751360128Z" level=info msg="RemoveContainer for \"584ea10bfc97f2eec4a1e76eb36a2be4215a2e6ee63ac0eb226635ca9e336614\" returns successfully" Jan 30 13:52:16.753440 kubelet[3113]: I0130 13:52:16.753422 3113 scope.go:117] "RemoveContainer" containerID="064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331" Jan 30 13:52:16.754773 containerd[1673]: time="2025-01-30T13:52:16.754690160Z" level=info msg="RemoveContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\"" Jan 30 13:52:16.773052 containerd[1673]: time="2025-01-30T13:52:16.766126871Z" level=info msg="RemoveContainer for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" returns successfully" Jan 30 13:52:16.777194 kubelet[3113]: I0130 13:52:16.777064 3113 scope.go:117] "RemoveContainer" containerID="064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331" Jan 30 13:52:16.778046 containerd[1673]: time="2025-01-30T13:52:16.777533783Z" level=error msg="ContainerStatus for \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\": not found" Jan 30 13:52:16.778293 kubelet[3113]: E0130 13:52:16.778214 3113 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\": not found" containerID="064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331" Jan 30 13:52:16.778293 kubelet[3113]: I0130 13:52:16.778263 3113 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331"} err="failed to get container status \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\": rpc error: code = NotFound desc = an error occurred when 
try to find container \"064c5da044ee776c662601c028fc4bf0ed84162d9a41c64e30f94302fc127331\": not found" Jan 30 13:52:16.785199 kubelet[3113]: I0130 13:52:16.785170 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mg8\" (UniqueName: \"kubernetes.io/projected/a543be2e-641d-42ff-9b77-b91f01d9bab1-kube-api-access-46mg8\") pod \"calico-typha-779c9796d4-x2zlh\" (UID: \"a543be2e-641d-42ff-9b77-b91f01d9bab1\") " pod="calico-system/calico-typha-779c9796d4-x2zlh" Jan 30 13:52:16.785454 kubelet[3113]: I0130 13:52:16.785433 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a543be2e-641d-42ff-9b77-b91f01d9bab1-tigera-ca-bundle\") pod \"calico-typha-779c9796d4-x2zlh\" (UID: \"a543be2e-641d-42ff-9b77-b91f01d9bab1\") " pod="calico-system/calico-typha-779c9796d4-x2zlh" Jan 30 13:52:16.785653 kubelet[3113]: I0130 13:52:16.785602 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/a543be2e-641d-42ff-9b77-b91f01d9bab1-typha-certs\") pod \"calico-typha-779c9796d4-x2zlh\" (UID: \"a543be2e-641d-42ff-9b77-b91f01d9bab1\") " pod="calico-system/calico-typha-779c9796d4-x2zlh" Jan 30 13:52:16.818222 systemd[1]: Started cri-containerd-5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d.scope - libcontainer container 5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d. Jan 30 13:52:16.965397 containerd[1673]: time="2025-01-30T13:52:16.964994987Z" level=info msg="StartContainer for \"5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d\" returns successfully" Jan 30 13:52:17.034461 containerd[1673]: time="2025-01-30T13:52:17.034390630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779c9796d4-x2zlh,Uid:a543be2e-641d-42ff-9b77-b91f01d9bab1,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:17.086571 containerd[1673]: time="2025-01-30T13:52:17.084696321Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:17.094462 containerd[1673]: time="2025-01-30T13:52:17.091567183Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:17.094462 containerd[1673]: time="2025-01-30T13:52:17.091605484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.094462 containerd[1673]: time="2025-01-30T13:52:17.091737187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:17.121234 systemd[1]: Started cri-containerd-1628cfd250c18a6cc2707f1ee5c5ea4fbd10cb7cea4c5d131b5646ffd11a05b8.scope - libcontainer container 1628cfd250c18a6cc2707f1ee5c5ea4fbd10cb7cea4c5d131b5646ffd11a05b8. 
Jan 30 13:52:17.192600 containerd[1673]: time="2025-01-30T13:52:17.192535574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-779c9796d4-x2zlh,Uid:a543be2e-641d-42ff-9b77-b91f01d9bab1,Namespace:calico-system,Attempt:0,} returns sandbox id \"1628cfd250c18a6cc2707f1ee5c5ea4fbd10cb7cea4c5d131b5646ffd11a05b8\"" Jan 30 13:52:17.205866 containerd[1673]: time="2025-01-30T13:52:17.205517881Z" level=info msg="CreateContainer within sandbox \"1628cfd250c18a6cc2707f1ee5c5ea4fbd10cb7cea4c5d131b5646ffd11a05b8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:52:17.234807 containerd[1673]: time="2025-01-30T13:52:17.234531068Z" level=info msg="CreateContainer within sandbox \"1628cfd250c18a6cc2707f1ee5c5ea4fbd10cb7cea4c5d131b5646ffd11a05b8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a72ced241345b944f61f2d6daf0a24f52fd9008cac4727d2218aad8941860ed\"" Jan 30 13:52:17.236046 containerd[1673]: time="2025-01-30T13:52:17.235787698Z" level=info msg="StartContainer for \"4a72ced241345b944f61f2d6daf0a24f52fd9008cac4727d2218aad8941860ed\"" Jan 30 13:52:17.274249 systemd[1]: Started cri-containerd-4a72ced241345b944f61f2d6daf0a24f52fd9008cac4727d2218aad8941860ed.scope - libcontainer container 4a72ced241345b944f61f2d6daf0a24f52fd9008cac4727d2218aad8941860ed. Jan 30 13:52:17.350538 containerd[1673]: time="2025-01-30T13:52:17.350480913Z" level=info msg="StartContainer for \"4a72ced241345b944f61f2d6daf0a24f52fd9008cac4727d2218aad8941860ed\" returns successfully" Jan 30 13:52:17.383215 kubelet[3113]: I0130 13:52:17.382461 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13aefa74-e574-433c-97ec-9a7237917ee6" path="/var/lib/kubelet/pods/13aefa74-e574-433c-97ec-9a7237917ee6/volumes" Jan 30 13:52:17.384894 kubelet[3113]: I0130 13:52:17.384849 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15229b91-3171-4a6d-bc5e-9d7f16114666" path="/var/lib/kubelet/pods/15229b91-3171-4a6d-bc5e-9d7f16114666/volumes" Jan 30 13:52:17.594123 systemd[1]: run-containerd-runc-k8s.io-5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d-runc.l9EjnW.mount: Deactivated successfully. Jan 30 13:52:17.654517 containerd[1673]: time="2025-01-30T13:52:17.654171103Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config" Jan 30 13:52:17.657266 systemd[1]: cri-containerd-5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d.scope: Deactivated successfully. Jan 30 13:52:17.692616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d-rootfs.mount: Deactivated successfully. 
Jan 30 13:52:17.711999 kubelet[3113]: I0130 13:52:17.711717 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-779c9796d4-x2zlh" podStartSLOduration=2.711693565 podStartE2EDuration="2.711693565s" podCreationTimestamp="2025-01-30 13:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:17.70980322 +0000 UTC m=+74.448739991" watchObservedRunningTime="2025-01-30 13:52:17.711693565 +0000 UTC m=+74.450630436" Jan 30 13:52:17.740925 containerd[1673]: time="2025-01-30T13:52:17.740847955Z" level=info msg="shim disconnected" id=5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d namespace=k8s.io Jan 30 13:52:17.740925 containerd[1673]: time="2025-01-30T13:52:17.740921557Z" level=warning msg="cleaning up after shim disconnected" id=5fa21ebe1b7f065e47cb81d959d259066d507cd86f1f2ae6e91a64040fb5a55d namespace=k8s.io Jan 30 13:52:17.740925 containerd[1673]: time="2025-01-30T13:52:17.740932857Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:52:18.732508 containerd[1673]: time="2025-01-30T13:52:18.731526210Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:52:18.768430 containerd[1673]: time="2025-01-30T13:52:18.768292281Z" level=info msg="CreateContainer within sandbox \"c5468cf80708efda755d494d69d1aa3e8de49c1dcdac497ac57757dae40f9fcb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c\"" Jan 30 13:52:18.769191 containerd[1673]: time="2025-01-30T13:52:18.769148301Z" level=info msg="StartContainer for \"ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c\"" Jan 30 13:52:18.769697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1568396629.mount: Deactivated successfully. Jan 30 13:52:18.804201 systemd[1]: Started cri-containerd-ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c.scope - libcontainer container ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c. 
Jan 30 13:52:18.835400 containerd[1673]: time="2025-01-30T13:52:18.835343968Z" level=info msg="StartContainer for \"ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c\" returns successfully" Jan 30 13:52:20.377429 containerd[1673]: time="2025-01-30T13:52:20.377373677Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:52:20.473088 kubelet[3113]: I0130 13:52:20.471647 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-r9696" podStartSLOduration=5.471623608 podStartE2EDuration="5.471623608s" podCreationTimestamp="2025-01-30 13:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:19.744964204 +0000 UTC m=+76.483900975" watchObservedRunningTime="2025-01-30 13:52:20.471623608 +0000 UTC m=+77.210560379" Jan 30 13:52:20.505331 kernel: bpftool[5310]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.472 [INFO][5286] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.472 [INFO][5286] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" iface="eth0" netns="/var/run/netns/cni-d2ad32c8-35a5-a8c3-9357-66fe97589a2d" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.474 [INFO][5286] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" iface="eth0" netns="/var/run/netns/cni-d2ad32c8-35a5-a8c3-9357-66fe97589a2d" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.474 [INFO][5286] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" iface="eth0" netns="/var/run/netns/cni-d2ad32c8-35a5-a8c3-9357-66fe97589a2d" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.474 [INFO][5286] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.474 [INFO][5286] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.510 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.510 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.510 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.521 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.521 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.523 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:20.527121 containerd[1673]: 2025-01-30 13:52:20.525 [INFO][5286] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:52:20.532368 containerd[1673]: time="2025-01-30T13:52:20.532171342Z" level=info msg="TearDown network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" successfully" Jan 30 13:52:20.532368 containerd[1673]: time="2025-01-30T13:52:20.532213243Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" returns successfully" Jan 30 13:52:20.535091 containerd[1673]: time="2025-01-30T13:52:20.533297068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-vtx2w,Uid:d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:52:20.538838 systemd[1]: run-netns-cni\x2dd2ad32c8\x2d35a5\x2da8c3\x2d9357\x2d66fe97589a2d.mount: Deactivated successfully. 
Jan 30 13:52:20.768407 systemd-networkd[1436]: cali681922ca3a2: Link UP Jan 30 13:52:20.770614 systemd-networkd[1436]: cali681922ca3a2: Gained carrier Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.675 [INFO][5331] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0 calico-apiserver-5df8c6b8fc- calico-apiserver d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7 998 0 2025-01-30 13:51:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df8c6b8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a calico-apiserver-5df8c6b8fc-vtx2w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali681922ca3a2 [] []}} ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.676 [INFO][5331] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.714 [INFO][5341] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" HandleID="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.724 [INFO][5341] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" HandleID="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334f60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"calico-apiserver-5df8c6b8fc-vtx2w", "timestamp":"2025-01-30 13:52:20.714719464 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.724 [INFO][5341] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.724 [INFO][5341] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.724 [INFO][5341] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.726 [INFO][5341] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.730 [INFO][5341] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.734 [INFO][5341] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.736 [INFO][5341] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.738 [INFO][5341] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.738 [INFO][5341] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.740 [INFO][5341] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.749 [INFO][5341] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.755 [INFO][5341] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.1/26] block=192.168.50.0/26 handle="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.755 [INFO][5341] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.1/26] handle="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.755 [INFO][5341] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:20.798804 containerd[1673]: 2025-01-30 13:52:20.755 [INFO][5341] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.1/26] IPv6=[] ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" HandleID="k8s-pod-network.2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.757 [INFO][5331] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"calico-apiserver-5df8c6b8fc-vtx2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali681922ca3a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.757 [INFO][5331] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.1/32] ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.757 [INFO][5331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali681922ca3a2 ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.770 [INFO][5331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.773 [INFO][5331] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7", ResourceVersion:"998", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd", Pod:"calico-apiserver-5df8c6b8fc-vtx2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali681922ca3a2", MAC:"fa:c6:d9:07:56:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:20.799780 containerd[1673]: 2025-01-30 13:52:20.795 [INFO][5331] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-vtx2w" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:52:20.831145 containerd[1673]: time="2025-01-30T13:52:20.829387278Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:20.831145 containerd[1673]: time="2025-01-30T13:52:20.829451480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:20.831145 containerd[1673]: time="2025-01-30T13:52:20.829472580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:20.831145 containerd[1673]: time="2025-01-30T13:52:20.829556382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:20.869704 systemd[1]: run-containerd-runc-k8s.io-2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd-runc.xu5AG0.mount: Deactivated successfully. Jan 30 13:52:20.882253 systemd[1]: Started cri-containerd-2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd.scope - libcontainer container 2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd. 
Jan 30 13:52:20.936787 containerd[1673]: time="2025-01-30T13:52:20.936729420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-vtx2w,Uid:d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd\"" Jan 30 13:52:20.940440 containerd[1673]: time="2025-01-30T13:52:20.940360506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:52:21.001055 systemd-networkd[1436]: vxlan.calico: Link UP Jan 30 13:52:21.001067 systemd-networkd[1436]: vxlan.calico: Gained carrier Jan 30 13:52:22.317176 systemd-networkd[1436]: cali681922ca3a2: Gained IPv6LL Jan 30 13:52:22.509445 systemd-networkd[1436]: vxlan.calico: Gained IPv6LL Jan 30 13:52:23.115618 containerd[1673]: time="2025-01-30T13:52:23.115556954Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.117684 containerd[1673]: time="2025-01-30T13:52:23.117627503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:52:23.121151 containerd[1673]: time="2025-01-30T13:52:23.121087485Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.125428 containerd[1673]: time="2025-01-30T13:52:23.125369587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:23.126487 containerd[1673]: time="2025-01-30T13:52:23.126043803Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.185454291s" Jan 30 13:52:23.126487 containerd[1673]: time="2025-01-30T13:52:23.126100104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:52:23.129024 containerd[1673]: time="2025-01-30T13:52:23.128976872Z" level=info msg="CreateContainer within sandbox \"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:23.161784 containerd[1673]: time="2025-01-30T13:52:23.161738749Z" level=info msg="CreateContainer within sandbox \"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8beb42b2654d17909fc80b03c5cae5c83779100c1f8178cf37005651b0d72b47\"" Jan 30 13:52:23.163082 containerd[1673]: time="2025-01-30T13:52:23.162356264Z" level=info msg="StartContainer for \"8beb42b2654d17909fc80b03c5cae5c83779100c1f8178cf37005651b0d72b47\"" Jan 30 13:52:23.200210 systemd[1]: Started cri-containerd-8beb42b2654d17909fc80b03c5cae5c83779100c1f8178cf37005651b0d72b47.scope - libcontainer container 8beb42b2654d17909fc80b03c5cae5c83779100c1f8178cf37005651b0d72b47. 
Jan 30 13:52:23.244426 containerd[1673]: time="2025-01-30T13:52:23.244371809Z" level=info msg="StartContainer for \"8beb42b2654d17909fc80b03c5cae5c83779100c1f8178cf37005651b0d72b47\" returns successfully" Jan 30 13:52:23.382391 containerd[1673]: time="2025-01-30T13:52:23.382132876Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.443 [INFO][5529] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.444 [INFO][5529] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" iface="eth0" netns="/var/run/netns/cni-f42b45b6-1bca-0f33-175f-05c05b11ad05" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.444 [INFO][5529] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" iface="eth0" netns="/var/run/netns/cni-f42b45b6-1bca-0f33-175f-05c05b11ad05" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.445 [INFO][5529] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" iface="eth0" netns="/var/run/netns/cni-f42b45b6-1bca-0f33-175f-05c05b11ad05" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.445 [INFO][5529] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.447 [INFO][5529] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.486 [INFO][5536] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.486 [INFO][5536] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.486 [INFO][5536] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.493 [WARNING][5536] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.493 [INFO][5536] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.495 [INFO][5536] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:23.498160 containerd[1673]: 2025-01-30 13:52:23.496 [INFO][5529] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:52:23.500432 containerd[1673]: time="2025-01-30T13:52:23.500224976Z" level=info msg="TearDown network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" successfully" Jan 30 13:52:23.500432 containerd[1673]: time="2025-01-30T13:52:23.500268277Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" returns successfully" Jan 30 13:52:23.501663 containerd[1673]: time="2025-01-30T13:52:23.501629810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8997,Uid:7c2a0430-bfea-48a8-b9a0-8ea183a3114a,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:23.503752 systemd[1]: run-netns-cni\x2df42b45b6\x2d1bca\x2d0f33\x2d175f\x2d05c05b11ad05.mount: Deactivated successfully. Jan 30 13:52:23.698573 systemd-networkd[1436]: calie1b12102759: Link UP Jan 30 13:52:23.698787 systemd-networkd[1436]: calie1b12102759: Gained carrier Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.596 [INFO][5543] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0 coredns-6f6b679f8f- kube-system 7c2a0430-bfea-48a8-b9a0-8ea183a3114a 1014 0 2025-01-30 13:51:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a coredns-6f6b679f8f-z8997 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie1b12102759 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.597 [INFO][5543] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.639 [INFO][5553] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" HandleID="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.650 [INFO][5553] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" HandleID="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003196b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"coredns-6f6b679f8f-z8997", "timestamp":"2025-01-30 13:52:23.639641783 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.650 [INFO][5553] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.650 [INFO][5553] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.650 [INFO][5553] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.653 [INFO][5553] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.657 [INFO][5553] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.664 [INFO][5553] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.665 [INFO][5553] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.668 [INFO][5553] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.668 [INFO][5553] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.672 [INFO][5553] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1 Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.683 [INFO][5553] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.693 [INFO][5553] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.2/26] block=192.168.50.0/26 handle="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.693 [INFO][5553] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.2/26] handle="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.693 [INFO][5553] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:23.721951 containerd[1673]: 2025-01-30 13:52:23.693 [INFO][5553] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.2/26] IPv6=[] ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" HandleID="k8s-pod-network.b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.695 [INFO][5543] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c2a0430-bfea-48a8-b9a0-8ea183a3114a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"coredns-6f6b679f8f-z8997", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1b12102759", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.695 [INFO][5543] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.2/32] ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.695 [INFO][5543] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1b12102759 ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.699 [INFO][5543] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" 
WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.700 [INFO][5543] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c2a0430-bfea-48a8-b9a0-8ea183a3114a", ResourceVersion:"1014", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1", Pod:"coredns-6f6b679f8f-z8997", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1b12102759", MAC:"e2:2c:76:b7:40:98", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:23.724060 containerd[1673]: 2025-01-30 13:52:23.719 [INFO][5543] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1" Namespace="kube-system" Pod="coredns-6f6b679f8f-z8997" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:52:23.778471 containerd[1673]: time="2025-01-30T13:52:23.778220769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:23.778471 containerd[1673]: time="2025-01-30T13:52:23.778344572Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:23.778953 containerd[1673]: time="2025-01-30T13:52:23.778397173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:23.778953 containerd[1673]: time="2025-01-30T13:52:23.778832084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:23.808198 systemd[1]: Started cri-containerd-b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1.scope - libcontainer container b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1. Jan 30 13:52:23.874136 containerd[1673]: time="2025-01-30T13:52:23.873771335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-z8997,Uid:7c2a0430-bfea-48a8-b9a0-8ea183a3114a,Namespace:kube-system,Attempt:1,} returns sandbox id \"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1\"" Jan 30 13:52:23.884866 containerd[1673]: time="2025-01-30T13:52:23.884819097Z" level=info msg="CreateContainer within sandbox \"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:23.919797 containerd[1673]: time="2025-01-30T13:52:23.919736025Z" level=info msg="CreateContainer within sandbox \"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a5db738b816b046714353df1fb7487f84e6522aa01419788745bdb034a25e088\"" Jan 30 13:52:23.921389 containerd[1673]: time="2025-01-30T13:52:23.921348963Z" level=info msg="StartContainer for \"a5db738b816b046714353df1fb7487f84e6522aa01419788745bdb034a25e088\"" Jan 30 13:52:23.953178 systemd[1]: Started cri-containerd-a5db738b816b046714353df1fb7487f84e6522aa01419788745bdb034a25e088.scope - libcontainer container a5db738b816b046714353df1fb7487f84e6522aa01419788745bdb034a25e088. Jan 30 13:52:23.988035 containerd[1673]: time="2025-01-30T13:52:23.987883341Z" level=info msg="StartContainer for \"a5db738b816b046714353df1fb7487f84e6522aa01419788745bdb034a25e088\" returns successfully" Jan 30 13:52:24.247615 kubelet[3113]: I0130 13:52:24.247278 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-vtx2w" podStartSLOduration=67.058912932 podStartE2EDuration="1m9.247240592s" podCreationTimestamp="2025-01-30 13:51:15 +0000 UTC" firstStartedPulling="2025-01-30 13:52:20.938822969 +0000 UTC m=+77.677759840" lastFinishedPulling="2025-01-30 13:52:23.127150729 +0000 UTC m=+79.866087500" observedRunningTime="2025-01-30 13:52:23.753756489 +0000 UTC m=+80.492693260" watchObservedRunningTime="2025-01-30 13:52:24.247240592 +0000 UTC m=+80.986177463" Jan 30 13:52:24.377028 containerd[1673]: time="2025-01-30T13:52:24.376548359Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:52:24.378535 containerd[1673]: time="2025-01-30T13:52:24.378127196Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.457 [INFO][5684] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.459 [INFO][5684] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" iface="eth0" netns="/var/run/netns/cni-c77d315c-c00b-2d6f-3c6b-8f1a7ec122cd" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.459 [INFO][5684] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" iface="eth0" netns="/var/run/netns/cni-c77d315c-c00b-2d6f-3c6b-8f1a7ec122cd" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.459 [INFO][5684] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" iface="eth0" netns="/var/run/netns/cni-c77d315c-c00b-2d6f-3c6b-8f1a7ec122cd" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.460 [INFO][5684] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.460 [INFO][5684] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.493 [INFO][5696] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.494 [INFO][5696] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.494 [INFO][5696] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.508 [WARNING][5696] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.508 [INFO][5696] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.511 [INFO][5696] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:24.514082 containerd[1673]: 2025-01-30 13:52:24.512 [INFO][5684] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:52:24.517893 containerd[1673]: time="2025-01-30T13:52:24.517082691Z" level=info msg="TearDown network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" successfully" Jan 30 13:52:24.517893 containerd[1673]: time="2025-01-30T13:52:24.517144393Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" returns successfully" Jan 30 13:52:24.520396 systemd[1]: run-netns-cni\x2dc77d315c\x2dc00b\x2d2d6f\x2d3c6b\x2d8f1a7ec122cd.mount: Deactivated successfully. 
Jan 30 13:52:24.523506 containerd[1673]: time="2025-01-30T13:52:24.523122235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-2m7hp,Uid:7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.465 [INFO][5683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.465 [INFO][5683] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" iface="eth0" netns="/var/run/netns/cni-67ccacc0-c551-5668-241d-950391f1190d" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.467 [INFO][5683] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" iface="eth0" netns="/var/run/netns/cni-67ccacc0-c551-5668-241d-950391f1190d" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.469 [INFO][5683] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" iface="eth0" netns="/var/run/netns/cni-67ccacc0-c551-5668-241d-950391f1190d" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.469 [INFO][5683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.469 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.504 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.504 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.511 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.524 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.524 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.526 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:24.528532 containerd[1673]: 2025-01-30 13:52:24.527 [INFO][5683] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:52:24.529917 containerd[1673]: time="2025-01-30T13:52:24.528786969Z" level=info msg="TearDown network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" successfully" Jan 30 13:52:24.529917 containerd[1673]: time="2025-01-30T13:52:24.528814070Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" returns successfully" Jan 30 13:52:24.529917 containerd[1673]: time="2025-01-30T13:52:24.529470085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mcmv9,Uid:1654d24a-276c-4733-ab3c-b2a324f91922,Namespace:calico-system,Attempt:1,}" Jan 30 13:52:24.533772 systemd[1]: run-netns-cni\x2d67ccacc0\x2dc551\x2d5668\x2d241d\x2d950391f1190d.mount: Deactivated successfully. Jan 30 13:52:24.723538 systemd-networkd[1436]: calieafc116ac8c: Link UP Jan 30 13:52:24.724448 systemd-networkd[1436]: calieafc116ac8c: Gained carrier Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.639 [INFO][5709] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0 csi-node-driver- calico-system 1654d24a-276c-4733-ab3c-b2a324f91922 1037 0 2025-01-30 13:51:14 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a csi-node-driver-mcmv9 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calieafc116ac8c [] []}} ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.639 [INFO][5709] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.679 [INFO][5732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" HandleID="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.689 [INFO][5732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" HandleID="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318a90), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"csi-node-driver-mcmv9", "timestamp":"2025-01-30 13:52:24.679249937 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.689 [INFO][5732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.690 [INFO][5732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.690 [INFO][5732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.691 [INFO][5732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.697 [INFO][5732] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.700 [INFO][5732] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.702 [INFO][5732] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.704 [INFO][5732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.704 [INFO][5732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.705 [INFO][5732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.709 [INFO][5732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.3/26] block=192.168.50.0/26 handle="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.3/26] handle="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
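The host-side interface names in these entries (calie1b12102759, calieafc116ac8c) are all "cali" plus an 11-character hex string, which keeps the name within the kernel's 15-character interface-name limit. A plausible reconstruction, assuming the suffix is a truncated hash of the endpoint identity; the exact input Calico hashes is internal and not shown in the log, so this will not reproduce the logged names.

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethName sketches a "cali" + truncated-hash scheme: 4 prefix chars
// plus 11 hash chars fits the 15-character limit. The hash input used
// here (node/pod/interface) is an assumption, not Calico's actual key.
func vethName(endpointID string) string {
	sum := sha1.Sum([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("ci-4081.3.0-a-38674a3e2a/csi-node-driver-mcmv9/eth0"))
}
```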
Jan 30 13:52:24.749365 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.3/26] IPv6=[] ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" HandleID="k8s-pod-network.9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.719 [INFO][5709] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1654d24a-276c-4733-ab3c-b2a324f91922", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"csi-node-driver-mcmv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieafc116ac8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.719 [INFO][5709] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.3/32] ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.719 [INFO][5709] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieafc116ac8c ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.725 [INFO][5709] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.725 [INFO][5709] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1654d24a-276c-4733-ab3c-b2a324f91922", ResourceVersion:"1037", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee", Pod:"csi-node-driver-mcmv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieafc116ac8c", MAC:"0e:d9:73:16:b5:db", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:24.751314 containerd[1673]: 2025-01-30 13:52:24.744 [INFO][5709] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee" Namespace="calico-system" Pod="csi-node-driver-mcmv9" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:52:24.768085 kubelet[3113]: I0130 13:52:24.765762 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-z8997" podStartSLOduration=76.765589185 podStartE2EDuration="1m16.765589185s" podCreationTimestamp="2025-01-30 13:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:24.765155175 +0000 UTC m=+81.504092046" watchObservedRunningTime="2025-01-30 13:52:24.765589185 +0000 UTC m=+81.504525956" Jan 30 13:52:24.793330 containerd[1673]: time="2025-01-30T13:52:24.793216840Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:24.793330 containerd[1673]: time="2025-01-30T13:52:24.793300342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:24.794452 containerd[1673]: time="2025-01-30T13:52:24.793751153Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:24.794452 containerd[1673]: time="2025-01-30T13:52:24.793936557Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:24.819791 systemd[1]: Started cri-containerd-9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee.scope - libcontainer container 9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee. Jan 30 13:52:24.852735 systemd-networkd[1436]: cali79ad5a2b51c: Link UP Jan 30 13:52:24.855884 systemd-networkd[1436]: cali79ad5a2b51c: Gained carrier Jan 30 13:52:24.869838 containerd[1673]: time="2025-01-30T13:52:24.869786856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mcmv9,Uid:1654d24a-276c-4733-ab3c-b2a324f91922,Namespace:calico-system,Attempt:1,} returns sandbox id \"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee\"" Jan 30 13:52:24.873328 containerd[1673]: time="2025-01-30T13:52:24.873256638Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.641 [INFO][5720] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0 calico-apiserver-5df8c6b8fc- calico-apiserver 7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1 1036 0 2025-01-30 13:51:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5df8c6b8fc projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a calico-apiserver-5df8c6b8fc-2m7hp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali79ad5a2b51c [] []}} ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.641 [INFO][5720] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.687 [INFO][5736] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" HandleID="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.696 [INFO][5736] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" HandleID="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318e90), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"calico-apiserver-5df8c6b8fc-2m7hp", "timestamp":"2025-01-30 13:52:24.687574535 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.696 [INFO][5736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.717 [INFO][5736] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.795 [INFO][5736] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.803 [INFO][5736] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.809 [INFO][5736] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.816 [INFO][5736] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.822 [INFO][5736] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.822 [INFO][5736] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.825 [INFO][5736] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.831 [INFO][5736] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.842 [INFO][5736] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.4/26] block=192.168.50.0/26 handle="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.842 [INFO][5736] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.4/26] handle="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.842 [INFO][5736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:52:24.886351 containerd[1673]: 2025-01-30 13:52:24.842 [INFO][5736] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" HandleID="k8s-pod-network.9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.844 [INFO][5720] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"calico-apiserver-5df8c6b8fc-2m7hp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ad5a2b51c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.844 [INFO][5720] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.4/32] ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.844 [INFO][5720] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali79ad5a2b51c ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.857 [INFO][5720] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.859 [INFO][5720] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1", ResourceVersion:"1036", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c", Pod:"calico-apiserver-5df8c6b8fc-2m7hp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ad5a2b51c", MAC:"86:3c:d8:b3:8f:8e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:24.887321 containerd[1673]: 2025-01-30 13:52:24.882 [INFO][5720] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c" Namespace="calico-apiserver" Pod="calico-apiserver-5df8c6b8fc-2m7hp" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:52:24.918313 containerd[1673]: time="2025-01-30T13:52:24.918056201Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:24.918313 containerd[1673]: time="2025-01-30T13:52:24.918113302Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:24.918313 containerd[1673]: time="2025-01-30T13:52:24.918123602Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:24.918313 containerd[1673]: time="2025-01-30T13:52:24.918201804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:24.936209 systemd[1]: Started cri-containerd-9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c.scope - libcontainer container 9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c. 
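Entries like "Calico CNI IPAM assigned addresses IPv4=[192.168.50.2/26] … ContainerID=…" (ipam_plugin.go 283) are the easiest place to audit which sandbox received which address. A small sketch that scans journal output on stdin for those assignments; the regular expression is keyed to the exact message format seen above, and the program name is illustrative.

```go
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches the ipam_plugin.go 283 message, capturing the IPv4 CIDR
// list and the container ID, e.g.:
//   ... assigned addresses IPv4=[192.168.50.4/26] IPv6=[] ContainerID="9f3a..."
var assignRe = regexp.MustCompile(
	`Calico CNI IPAM assigned addresses IPv4=\[([^\]]*)\] IPv6=\[[^\]]*\] ContainerID="([0-9a-f]+)"`)

func main() {
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // journal lines can be very long
	for sc.Scan() {
		if m := assignRe.FindStringSubmatch(sc.Text()); m != nil {
			id := m[2]
			if len(id) > 12 {
				id = id[:12] // abbreviate the 64-char container ID
			}
			fmt.Printf("%s -> %s\n", id, m[1])
		}
	}
}
```

Fed a saved log like this one, it would print `b873de482b66 -> 192.168.50.2/26`, `9a7b855f629e -> 192.168.50.3/26`, `9f3a6b767984 -> 192.168.50.4/26`, matching the assignments above.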
Jan 30 13:52:24.978176 containerd[1673]: time="2025-01-30T13:52:24.978118625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5df8c6b8fc-2m7hp,Uid:7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c\"" Jan 30 13:52:24.981656 containerd[1673]: time="2025-01-30T13:52:24.981596208Z" level=info msg="CreateContainer within sandbox \"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:52:25.010514 containerd[1673]: time="2025-01-30T13:52:25.010459092Z" level=info msg="CreateContainer within sandbox \"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"99e75ed410875894eec279dcaa9ff745900dad19d19c2e0e3fdf709a427f44a4\"" Jan 30 13:52:25.011347 containerd[1673]: time="2025-01-30T13:52:25.011311112Z" level=info msg="StartContainer for \"99e75ed410875894eec279dcaa9ff745900dad19d19c2e0e3fdf709a427f44a4\"" Jan 30 13:52:25.042273 systemd[1]: Started cri-containerd-99e75ed410875894eec279dcaa9ff745900dad19d19c2e0e3fdf709a427f44a4.scope - libcontainer container 99e75ed410875894eec279dcaa9ff745900dad19d19c2e0e3fdf709a427f44a4. Jan 30 13:52:25.090289 containerd[1673]: time="2025-01-30T13:52:25.090232284Z" level=info msg="StartContainer for \"99e75ed410875894eec279dcaa9ff745900dad19d19c2e0e3fdf709a427f44a4\" returns successfully" Jan 30 13:52:25.389273 systemd-networkd[1436]: calie1b12102759: Gained IPv6LL Jan 30 13:52:25.775325 kubelet[3113]: I0130 13:52:25.774762 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5df8c6b8fc-2m7hp" podStartSLOduration=70.774737317 podStartE2EDuration="1m10.774737317s" podCreationTimestamp="2025-01-30 13:51:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:25.774510612 +0000 UTC m=+82.513447483" watchObservedRunningTime="2025-01-30 13:52:25.774737317 +0000 UTC m=+82.513674088" Jan 30 13:52:26.093224 systemd-networkd[1436]: calieafc116ac8c: Gained IPv6LL Jan 30 13:52:26.157262 systemd-networkd[1436]: cali79ad5a2b51c: Gained IPv6LL Jan 30 13:52:26.326107 containerd[1673]: time="2025-01-30T13:52:26.325777785Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.328355 containerd[1673]: time="2025-01-30T13:52:26.328302445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:52:26.331717 containerd[1673]: time="2025-01-30T13:52:26.331672125Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.337690 containerd[1673]: time="2025-01-30T13:52:26.337091153Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:26.338571 containerd[1673]: time="2025-01-30T13:52:26.337858772Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo 
tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.464525031s" Jan 30 13:52:26.338571 containerd[1673]: time="2025-01-30T13:52:26.337897073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:52:26.345550 containerd[1673]: time="2025-01-30T13:52:26.345442151Z" level=info msg="CreateContainer within sandbox \"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:52:26.390044 containerd[1673]: time="2025-01-30T13:52:26.389938807Z" level=info msg="CreateContainer within sandbox \"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"49f07f163b107326c9df0631cf55c9804c3450da16e576919db17ae76f4e7349\"" Jan 30 13:52:26.390895 containerd[1673]: time="2025-01-30T13:52:26.390658124Z" level=info msg="StartContainer for \"49f07f163b107326c9df0631cf55c9804c3450da16e576919db17ae76f4e7349\"" Jan 30 13:52:26.434196 systemd[1]: Started cri-containerd-49f07f163b107326c9df0631cf55c9804c3450da16e576919db17ae76f4e7349.scope - libcontainer container 49f07f163b107326c9df0631cf55c9804c3450da16e576919db17ae76f4e7349. Jan 30 13:52:26.471615 containerd[1673]: time="2025-01-30T13:52:26.471445040Z" level=info msg="StartContainer for \"49f07f163b107326c9df0631cf55c9804c3450da16e576919db17ae76f4e7349\" returns successfully" Jan 30 13:52:26.473864 containerd[1673]: time="2025-01-30T13:52:26.473575290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:52:27.765379 containerd[1673]: time="2025-01-30T13:52:27.765326824Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.768119 containerd[1673]: time="2025-01-30T13:52:27.767987688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:52:27.771616 containerd[1673]: time="2025-01-30T13:52:27.771562372Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.775961 containerd[1673]: time="2025-01-30T13:52:27.775906275Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:27.776741 containerd[1673]: time="2025-01-30T13:52:27.776608192Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.302978001s" Jan 30 13:52:27.776741 containerd[1673]: time="2025-01-30T13:52:27.776648393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:52:27.779087 containerd[1673]: time="2025-01-30T13:52:27.778920247Z" level=info msg="CreateContainer within sandbox \"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:52:27.820805 containerd[1673]: time="2025-01-30T13:52:27.820757439Z" level=info msg="CreateContainer within sandbox \"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"d8ac94ac251fd6a8fd0adb537cb75e0dfbfca2cf60e57ea9172c3c7e00826c84\"" Jan 30 13:52:27.828830 containerd[1673]: time="2025-01-30T13:52:27.826993987Z" level=info msg="StartContainer for \"d8ac94ac251fd6a8fd0adb537cb75e0dfbfca2cf60e57ea9172c3c7e00826c84\"" Jan 30 13:52:27.872184 systemd[1]: Started cri-containerd-d8ac94ac251fd6a8fd0adb537cb75e0dfbfca2cf60e57ea9172c3c7e00826c84.scope - libcontainer container d8ac94ac251fd6a8fd0adb537cb75e0dfbfca2cf60e57ea9172c3c7e00826c84. Jan 30 13:52:27.903927 containerd[1673]: time="2025-01-30T13:52:27.903878510Z" level=info msg="StartContainer for \"d8ac94ac251fd6a8fd0adb537cb75e0dfbfca2cf60e57ea9172c3c7e00826c84\" returns successfully" Jan 30 13:52:28.376172 containerd[1673]: time="2025-01-30T13:52:28.375797202Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.419 [INFO][5993] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.421 [INFO][5993] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" iface="eth0" netns="/var/run/netns/cni-b7f33d19-216d-650f-9148-ffb8b8c9e5b2" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.422 [INFO][5993] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" iface="eth0" netns="/var/run/netns/cni-b7f33d19-216d-650f-9148-ffb8b8c9e5b2" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.425 [INFO][5993] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" iface="eth0" netns="/var/run/netns/cni-b7f33d19-216d-650f-9148-ffb8b8c9e5b2" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.425 [INFO][5993] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.425 [INFO][5993] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.445 [INFO][5999] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.445 [INFO][5999] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.445 [INFO][5999] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.452 [WARNING][5999] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.452 [INFO][5999] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.453 [INFO][5999] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:28.455743 containerd[1673]: 2025-01-30 13:52:28.454 [INFO][5993] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:52:28.456617 containerd[1673]: time="2025-01-30T13:52:28.456117907Z" level=info msg="TearDown network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" successfully" Jan 30 13:52:28.456617 containerd[1673]: time="2025-01-30T13:52:28.456154308Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" returns successfully" Jan 30 13:52:28.458253 containerd[1673]: time="2025-01-30T13:52:28.457555041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4g4nh,Uid:f7fbcb71-4682-4a9b-9734-5a668b2754b3,Namespace:kube-system,Attempt:1,}" Jan 30 13:52:28.459702 systemd[1]: run-netns-cni\x2db7f33d19\x2d216d\x2d650f\x2d9148\x2dffb8b8c9e5b2.mount: Deactivated successfully. 
Jan 30 13:52:28.498248 kubelet[3113]: I0130 13:52:28.497960 3113 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:52:28.498248 kubelet[3113]: I0130 13:52:28.498156 3113 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:52:28.603402 systemd-networkd[1436]: cali0aba16c7e1d: Link UP Jan 30 13:52:28.603703 systemd-networkd[1436]: cali0aba16c7e1d: Gained carrier Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.532 [INFO][6006] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0 coredns-6f6b679f8f- kube-system f7fbcb71-4682-4a9b-9734-5a668b2754b3 1080 0 2025-01-30 13:51:08 +0000 UTC map[k8s-app:kube-dns pod-template-hash:6f6b679f8f projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a coredns-6f6b679f8f-4g4nh eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali0aba16c7e1d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.532 [INFO][6006] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.557 [INFO][6016] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" HandleID="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.567 [INFO][6016] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" HandleID="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292ae0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"coredns-6f6b679f8f-4g4nh", "timestamp":"2025-01-30 13:52:28.557955722 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.568 [INFO][6016] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.568 [INFO][6016] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.568 [INFO][6016] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.570 [INFO][6016] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.575 [INFO][6016] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.579 [INFO][6016] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.581 [INFO][6016] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.582 [INFO][6016] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.583 [INFO][6016] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.584 [INFO][6016] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31 Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.588 [INFO][6016] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.597 [INFO][6016] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.5/26] block=192.168.50.0/26 handle="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.597 [INFO][6016] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.5/26] handle="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.597 [INFO][6016] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
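The WorkloadEndpoint dumps for the coredns pods in this log print port numbers as Go hex literals, which makes them look opaque at first glance. Decoding them confirms they are the standard CoreDNS serving and metrics ports:

```go
package main

import "fmt"

func main() {
	// Values copied from the WorkloadEndpointPort dumps in this log.
	fmt.Println(0x35)   // 53   — the "dns" (UDP) and "dns-tcp" (TCP) ports
	fmt.Println(0x23c1) // 9153 — the "metrics" (TCP) port
}
```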
Jan 30 13:52:28.623448 containerd[1673]: 2025-01-30 13:52:28.598 [INFO][6016] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.5/26] IPv6=[] ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" HandleID="k8s-pod-network.03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.599 [INFO][6006] cni-plugin/k8s.go 386: Populated endpoint ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7fbcb71-4682-4a9b-9734-5a668b2754b3", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"coredns-6f6b679f8f-4g4nh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aba16c7e1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.600 [INFO][6006] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.5/32] ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.600 [INFO][6006] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0aba16c7e1d ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.604 [INFO][6006] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" 
WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.605 [INFO][6006] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7fbcb71-4682-4a9b-9734-5a668b2754b3", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31", Pod:"coredns-6f6b679f8f-4g4nh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aba16c7e1d", MAC:"ba:44:41:ad:32:9b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:28.625410 containerd[1673]: 2025-01-30 13:52:28.618 [INFO][6006] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31" Namespace="kube-system" Pod="coredns-6f6b679f8f-4g4nh" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:52:28.651953 containerd[1673]: time="2025-01-30T13:52:28.651693645Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:28.652156 containerd[1673]: time="2025-01-30T13:52:28.651770547Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:28.652156 containerd[1673]: time="2025-01-30T13:52:28.651882149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:28.652993 containerd[1673]: time="2025-01-30T13:52:28.652754470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:28.676208 systemd[1]: Started cri-containerd-03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31.scope - libcontainer container 03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31. Jan 30 13:52:28.718933 containerd[1673]: time="2025-01-30T13:52:28.718884738Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-4g4nh,Uid:f7fbcb71-4682-4a9b-9734-5a668b2754b3,Namespace:kube-system,Attempt:1,} returns sandbox id \"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31\"" Jan 30 13:52:28.724152 containerd[1673]: time="2025-01-30T13:52:28.724090662Z" level=info msg="CreateContainer within sandbox \"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:52:28.759804 containerd[1673]: time="2025-01-30T13:52:28.759677706Z" level=info msg="CreateContainer within sandbox \"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"59b469795f851482b5a967740144248c4df36b932fcd6dce73799c66144200f2\"" Jan 30 13:52:28.762024 containerd[1673]: time="2025-01-30T13:52:28.761081939Z" level=info msg="StartContainer for \"59b469795f851482b5a967740144248c4df36b932fcd6dce73799c66144200f2\"" Jan 30 13:52:28.800029 kubelet[3113]: I0130 13:52:28.799813 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mcmv9" podStartSLOduration=71.895019569 podStartE2EDuration="1m14.799785557s" podCreationTimestamp="2025-01-30 13:51:14 +0000 UTC" firstStartedPulling="2025-01-30 13:52:24.872720025 +0000 UTC m=+81.611656796" lastFinishedPulling="2025-01-30 13:52:27.777485913 +0000 UTC m=+84.516422784" observedRunningTime="2025-01-30 13:52:28.797077793 +0000 UTC m=+85.536014564" watchObservedRunningTime="2025-01-30 13:52:28.799785557 +0000 UTC m=+85.538722328" Jan 30 13:52:28.812127 systemd[1]: Started cri-containerd-59b469795f851482b5a967740144248c4df36b932fcd6dce73799c66144200f2.scope - libcontainer container 59b469795f851482b5a967740144248c4df36b932fcd6dce73799c66144200f2. Jan 30 13:52:28.865130 containerd[1673]: time="2025-01-30T13:52:28.865075005Z" level=info msg="StartContainer for \"59b469795f851482b5a967740144248c4df36b932fcd6dce73799c66144200f2\" returns successfully" Jan 30 13:52:29.377782 containerd[1673]: time="2025-01-30T13:52:29.376745440Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.422 [INFO][6129] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.423 [INFO][6129] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" iface="eth0" netns="/var/run/netns/cni-ca4d5152-8820-1c71-e448-c376cdbcae34" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.423 [INFO][6129] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" iface="eth0" netns="/var/run/netns/cni-ca4d5152-8820-1c71-e448-c376cdbcae34" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.425 [INFO][6129] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. 
Nothing to do. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" iface="eth0" netns="/var/run/netns/cni-ca4d5152-8820-1c71-e448-c376cdbcae34" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.425 [INFO][6129] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.425 [INFO][6129] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.445 [INFO][6135] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.445 [INFO][6135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.445 [INFO][6135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.450 [WARNING][6135] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.450 [INFO][6135] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.453 [INFO][6135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:52:29.459038 containerd[1673]: 2025-01-30 13:52:29.455 [INFO][6129] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:52:29.460660 containerd[1673]: time="2025-01-30T13:52:29.459883711Z" level=info msg="TearDown network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" successfully" Jan 30 13:52:29.460660 containerd[1673]: time="2025-01-30T13:52:29.459939113Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" returns successfully" Jan 30 13:52:29.462103 systemd[1]: run-netns-cni\x2dca4d5152\x2d8820\x2d1c71\x2de448\x2dc376cdbcae34.mount: Deactivated successfully. 
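The teardown above releases the address first by handle ID, then by workload ID, and treats "address doesn't exist" as a warning rather than a failure, so a repeated or stale CNI DEL stays safe. A sketch of that idempotent-release pattern — the function names here are hypothetical stand-ins for the datastore calls:

```go
package main

import (
	"errors"
	"fmt"
)

var errNotFound = errors.New("allocation not found")

// releaseByID is a stand-in for the IPAM datastore release call; here it
// always reports "not found" to exercise the stale-DEL path seen above.
func releaseByID(id string) error { return errNotFound }

// release mirrors the CNI DEL behavior in the log: try the handle ID,
// fall back to the workload ID, and swallow "not found" so a second
// teardown of the same sandbox is a no-op rather than an error.
func release(handleID, workloadID string) error {
	for _, id := range []string{handleID, workloadID} {
		err := releaseByID(id)
		if err == nil {
			return nil
		}
		if errors.Is(err, errNotFound) {
			continue // "Asked to release address but it doesn't exist. Ignoring"
		}
		return err
	}
	return nil // nothing left to release: teardown is idempotent
}

func main() {
	fmt.Println(release("k8s-pod-network.example-handle", "example-workload-id"))
}
```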
Jan 30 13:52:29.484641 kubelet[3113]: I0130 13:52:29.484207 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6962ed73-d508-4340-81ff-7f3201a82a70-tigera-ca-bundle\") pod \"6962ed73-d508-4340-81ff-7f3201a82a70\" (UID: \"6962ed73-d508-4340-81ff-7f3201a82a70\") " Jan 30 13:52:29.484641 kubelet[3113]: I0130 13:52:29.484273 3113 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bl9f9\" (UniqueName: \"kubernetes.io/projected/6962ed73-d508-4340-81ff-7f3201a82a70-kube-api-access-bl9f9\") pod \"6962ed73-d508-4340-81ff-7f3201a82a70\" (UID: \"6962ed73-d508-4340-81ff-7f3201a82a70\") " Jan 30 13:52:29.484641 kubelet[3113]: I0130 13:52:29.484605 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6962ed73-d508-4340-81ff-7f3201a82a70-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "6962ed73-d508-4340-81ff-7f3201a82a70" (UID: "6962ed73-d508-4340-81ff-7f3201a82a70"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:52:29.489156 kubelet[3113]: I0130 13:52:29.489123 3113 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6962ed73-d508-4340-81ff-7f3201a82a70-kube-api-access-bl9f9" (OuterVolumeSpecName: "kube-api-access-bl9f9") pod "6962ed73-d508-4340-81ff-7f3201a82a70" (UID: "6962ed73-d508-4340-81ff-7f3201a82a70"). InnerVolumeSpecName "kube-api-access-bl9f9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:52:29.490746 systemd[1]: var-lib-kubelet-pods-6962ed73\x2dd508\x2d4340\x2d81ff\x2d7f3201a82a70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbl9f9.mount: Deactivated successfully. Jan 30 13:52:29.585546 kubelet[3113]: I0130 13:52:29.585491 3113 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6962ed73-d508-4340-81ff-7f3201a82a70-tigera-ca-bundle\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:29.585546 kubelet[3113]: I0130 13:52:29.585536 3113 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-bl9f9\" (UniqueName: \"kubernetes.io/projected/6962ed73-d508-4340-81ff-7f3201a82a70-kube-api-access-bl9f9\") on node \"ci-4081.3.0-a-38674a3e2a\" DevicePath \"\"" Jan 30 13:52:29.793507 systemd[1]: Removed slice kubepods-besteffort-pod6962ed73_d508_4340_81ff_7f3201a82a70.slice - libcontainer container kubepods-besteffort-pod6962ed73_d508_4340_81ff_7f3201a82a70.slice. Jan 30 13:52:29.805594 kubelet[3113]: I0130 13:52:29.805518 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-4g4nh" podStartSLOduration=81.805494108 podStartE2EDuration="1m21.805494108s" podCreationTimestamp="2025-01-30 13:51:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:52:29.804470283 +0000 UTC m=+86.543407154" watchObservedRunningTime="2025-01-30 13:52:29.805494108 +0000 UTC m=+86.544430879" Jan 30 13:52:29.891726 systemd[1]: Created slice kubepods-besteffort-pod5bbd649c_49c0_47c2_9aa3_6d2c890d8fb7.slice - libcontainer container kubepods-besteffort-pod5bbd649c_49c0_47c2_9aa3_6d2c890d8fb7.slice. 
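The two pod_startup_latency_tracker entries above and at 13:52:28.799 fit a simple relation: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and podStartSLOduration is that span with the image-pull window (lastFinishedPulling minus firstStartedPulling) subtracted. For csi-node-driver-mcmv9 that is roughly 74.800s − 2.905s ≈ 71.895s; for coredns-6f6b679f8f-4g4nh the pull timestamps are zero values, so both durations are equal. A small Go check of that arithmetic using the timestamps from the log:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps for csi-node-driver-mcmv9, copied from the kubelet entry above.
	created := parse("2025-01-30 13:51:14 +0000 UTC")
	running := parse("2025-01-30 13:52:28.799785557 +0000 UTC")
	pullStart := parse("2025-01-30 13:52:24.872720025 +0000 UTC")
	pullEnd := parse("2025-01-30 13:52:27.777485913 +0000 UTC")

	e2e := running.Sub(created)          // podStartE2EDuration: 1m14.799785557s
	slo := e2e - pullEnd.Sub(pullStart)  // podStartSLOduration: pull time excluded

	// Prints ≈ 1m14.799785557s and ≈ 1m11.895019s — matching the logged
	// 71.895019569s up to sub-microsecond wall-vs-monotonic clock skew.
	fmt.Println(e2e, slo)
}
```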
Jan 30 13:52:29.989539 kubelet[3113]: I0130 13:52:29.989480 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7-tigera-ca-bundle\") pod \"calico-kube-controllers-8754b5b8-d48qg\" (UID: \"5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7\") " pod="calico-system/calico-kube-controllers-8754b5b8-d48qg" Jan 30 13:52:29.989539 kubelet[3113]: I0130 13:52:29.989532 3113 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwhsk\" (UniqueName: \"kubernetes.io/projected/5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7-kube-api-access-wwhsk\") pod \"calico-kube-controllers-8754b5b8-d48qg\" (UID: \"5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7\") " pod="calico-system/calico-kube-controllers-8754b5b8-d48qg" Jan 30 13:52:30.196047 containerd[1673]: time="2025-01-30T13:52:30.195944917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8754b5b8-d48qg,Uid:5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7,Namespace:calico-system,Attempt:0,}" Jan 30 13:52:30.337679 systemd-networkd[1436]: cali92595af894d: Link UP Jan 30 13:52:30.338990 systemd-networkd[1436]: cali92595af894d: Gained carrier Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.268 [INFO][6145] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0 calico-kube-controllers-8754b5b8- calico-system 5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7 1118 0 2025-01-30 13:52:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:8754b5b8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-38674a3e2a calico-kube-controllers-8754b5b8-d48qg eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali92595af894d [] []}} ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.268 [INFO][6145] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.295 [INFO][6157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" HandleID="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.303 [INFO][6157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" HandleID="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318040), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-38674a3e2a", "pod":"calico-kube-controllers-8754b5b8-d48qg", "timestamp":"2025-01-30 13:52:30.295120152 +0000 UTC"}, Hostname:"ci-4081.3.0-a-38674a3e2a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.303 [INFO][6157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.303 [INFO][6157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.303 [INFO][6157] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-38674a3e2a' Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.305 [INFO][6157] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.308 [INFO][6157] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.311 [INFO][6157] ipam/ipam.go 489: Trying affinity for 192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.313 [INFO][6157] ipam/ipam.go 155: Attempting to load block cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.315 [INFO][6157] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.50.0/26 host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.315 [INFO][6157] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.50.0/26 handle="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.316 [INFO][6157] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3 Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.324 [INFO][6157] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.50.0/26 handle="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.332 [INFO][6157] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.50.6/26] block=192.168.50.0/26 handle="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.332 [INFO][6157] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.50.6/26] handle="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" host="ci-4081.3.0-a-38674a3e2a" Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.332 [INFO][6157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
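The host-side interface names in these entries (cali0aba16c7e1d for the coredns pod, cali92595af894d for calico-kube-controllers) are deterministic rather than random: Calico derives them from the workload endpoint identity so repeated ADDs for the same pod yield the same device name. A sketch of that scheme, assuming (as in libcalico-go) a SHA-1 of the workload ID truncated so that "cali" plus the suffix fits the 15-character Linux interface-name limit — the exact input string Calico hashes is an implementation detail, so this will not reproduce the log's names:

```go
package main

import (
	"crypto/sha1"
	"encoding/hex"
	"fmt"
)

// vethNameForWorkload sketches the deterministic naming: hash the workload
// identifier and keep 11 hex characters, so "cali" + suffix stays within
// IFNAMSIZ-1 (15 characters).
func vethNameForWorkload(workloadID string) string {
	sum := sha1.Sum([]byte(workloadID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	// Hypothetical workload identifier for illustration.
	fmt.Println(vethNameForWorkload("kube-system/coredns-6f6b679f8f-4g4nh"))
}
```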
Jan 30 13:52:30.357620 containerd[1673]: 2025-01-30 13:52:30.332 [INFO][6157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.50.6/26] IPv6=[] ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" HandleID="k8s-pod-network.7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 13:52:30.334 [INFO][6145] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0", GenerateName:"calico-kube-controllers-8754b5b8-", Namespace:"calico-system", SelfLink:"", UID:"5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8754b5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"", Pod:"calico-kube-controllers-8754b5b8-d48qg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92595af894d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 13:52:30.334 [INFO][6145] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.50.6/32] ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 13:52:30.334 [INFO][6145] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali92595af894d ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 13:52:30.339 [INFO][6145] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 
13:52:30.340 [INFO][6145] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0", GenerateName:"calico-kube-controllers-8754b5b8-", Namespace:"calico-system", SelfLink:"", UID:"5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7", ResourceVersion:"1118", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 52, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"8754b5b8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3", Pod:"calico-kube-controllers-8754b5b8-d48qg", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.50.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali92595af894d", MAC:"86:1d:d7:39:27:60", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:52:30.359976 containerd[1673]: 2025-01-30 13:52:30.353 [INFO][6145] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3" Namespace="calico-system" Pod="calico-kube-controllers-8754b5b8-d48qg" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--8754b5b8--d48qg-eth0" Jan 30 13:52:30.392282 containerd[1673]: time="2025-01-30T13:52:30.391807229Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:52:30.392282 containerd[1673]: time="2025-01-30T13:52:30.391878931Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:52:30.392282 containerd[1673]: time="2025-01-30T13:52:30.391899331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:30.392282 containerd[1673]: time="2025-01-30T13:52:30.392143037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:52:30.421191 systemd[1]: Started cri-containerd-7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3.scope - libcontainer container 7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3. 
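Both endpoint dumps above end with a freshly generated MAC (ba:44:41:ad:32:9b for the coredns endpoint, 86:1d:d7:39:27:60 here). Each first octet has the locally-administered bit set and the group/multicast bit clear (0xba and 0x86 both end in binary ...10), the standard recipe for synthesizing a unicast MAC that cannot collide with vendor-assigned hardware addresses. A sketch of that generation step:

```go
package main

import (
	"crypto/rand"
	"fmt"
	"net"
)

// randomLocalUnicastMAC returns a random MAC with the locally-administered
// bit set (0x02) and the group/multicast bit cleared (0x01) in the first
// octet — the same shape as the cali endpoint MACs in the dumps above.
func randomLocalUnicastMAC() (net.HardwareAddr, error) {
	mac := make(net.HardwareAddr, 6)
	if _, err := rand.Read(mac); err != nil {
		return nil, err
	}
	mac[0] = (mac[0] | 0x02) &^ 0x01 // locally administered, unicast
	return mac, nil
}

func main() {
	mac, err := randomLocalUnicastMAC()
	fmt.Println(mac, err) // e.g. ba:44:41:ad:32:9b-shaped output
}
```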
Jan 30 13:52:30.445145 systemd-networkd[1436]: cali0aba16c7e1d: Gained IPv6LL Jan 30 13:52:30.469193 containerd[1673]: time="2025-01-30T13:52:30.468938245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-8754b5b8-d48qg,Uid:5bbd649c-49c0-47c2-9aa3-6d2c890d8fb7,Namespace:calico-system,Attempt:0,} returns sandbox id \"7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3\"" Jan 30 13:52:30.472670 containerd[1673]: time="2025-01-30T13:52:30.472511729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:52:31.379432 kubelet[3113]: I0130 13:52:31.379377 3113 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6962ed73-d508-4340-81ff-7f3201a82a70" path="/var/lib/kubelet/pods/6962ed73-d508-4340-81ff-7f3201a82a70/volumes" Jan 30 13:52:31.405170 systemd-networkd[1436]: cali92595af894d: Gained IPv6LL Jan 30 13:52:33.137169 containerd[1673]: time="2025-01-30T13:52:33.137108277Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:33.139165 containerd[1673]: time="2025-01-30T13:52:33.139097124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:52:33.143340 containerd[1673]: time="2025-01-30T13:52:33.143277923Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:33.147336 containerd[1673]: time="2025-01-30T13:52:33.147282417Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:52:33.148422 containerd[1673]: time="2025-01-30T13:52:33.147907132Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.6753108s" Jan 30 13:52:33.148422 containerd[1673]: time="2025-01-30T13:52:33.147951233Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:52:33.167971 containerd[1673]: time="2025-01-30T13:52:33.167919003Z" level=info msg="CreateContainer within sandbox \"7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:52:33.204243 containerd[1673]: time="2025-01-30T13:52:33.204195057Z" level=info msg="CreateContainer within sandbox \"7bbcaa0ef2cdf2df5a93ca1597c35e104a062ee6253613cd129cce31e66f84d3\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c27ecf94229751cc3c7d0c0b5a6de3139aa3de0f2a041728341e716ed434ffc8\"" Jan 30 13:52:33.205109 containerd[1673]: time="2025-01-30T13:52:33.204938175Z" level=info msg="StartContainer for \"c27ecf94229751cc3c7d0c0b5a6de3139aa3de0f2a041728341e716ed434ffc8\"" Jan 30 13:52:33.239167 systemd[1]: Started cri-containerd-c27ecf94229751cc3c7d0c0b5a6de3139aa3de0f2a041728341e716ed434ffc8.scope - 
libcontainer container c27ecf94229751cc3c7d0c0b5a6de3139aa3de0f2a041728341e716ed434ffc8. Jan 30 13:52:33.284340 containerd[1673]: time="2025-01-30T13:52:33.284286643Z" level=info msg="StartContainer for \"c27ecf94229751cc3c7d0c0b5a6de3139aa3de0f2a041728341e716ed434ffc8\" returns successfully" Jan 30 13:52:33.820296 kubelet[3113]: I0130 13:52:33.819946 3113 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-8754b5b8-d48qg" podStartSLOduration=2.142885616 podStartE2EDuration="4.819928357s" podCreationTimestamp="2025-01-30 13:52:29 +0000 UTC" firstStartedPulling="2025-01-30 13:52:30.471822413 +0000 UTC m=+87.210759284" lastFinishedPulling="2025-01-30 13:52:33.148865154 +0000 UTC m=+89.887802025" observedRunningTime="2025-01-30 13:52:33.817606302 +0000 UTC m=+90.556543073" watchObservedRunningTime="2025-01-30 13:52:33.819928357 +0000 UTC m=+90.558865128" Jan 30 13:52:46.140284 systemd[1]: run-containerd-runc-k8s.io-ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c-runc.hasuY6.mount: Deactivated successfully. Jan 30 13:53:03.417575 containerd[1673]: time="2025-01-30T13:53:03.417506474Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.457 [WARNING][6388] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd", Pod:"calico-apiserver-5df8c6b8fc-vtx2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali681922ca3a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.458 [INFO][6388] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.458 [INFO][6388] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" iface="eth0" netns="" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.458 [INFO][6388] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.458 [INFO][6388] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.485 [INFO][6394] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.485 [INFO][6394] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.485 [INFO][6394] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.492 [WARNING][6394] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.492 [INFO][6394] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.493 [INFO][6394] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.495728 containerd[1673]: 2025-01-30 13:53:03.494 [INFO][6388] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.496634 containerd[1673]: time="2025-01-30T13:53:03.495783845Z" level=info msg="TearDown network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" successfully" Jan 30 13:53:03.496634 containerd[1673]: time="2025-01-30T13:53:03.495819546Z" level=info msg="StopPodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" returns successfully" Jan 30 13:53:03.496769 containerd[1673]: time="2025-01-30T13:53:03.496625766Z" level=info msg="RemovePodSandbox for \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:53:03.496769 containerd[1673]: time="2025-01-30T13:53:03.496668767Z" level=info msg="Forcibly stopping sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\"" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.543 [WARNING][6413] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"d6c294f8-6ad1-443a-b8e8-6d3c60b2eab7", ResourceVersion:"1025", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd", Pod:"calico-apiserver-5df8c6b8fc-vtx2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali681922ca3a2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.543 [INFO][6413] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.543 [INFO][6413] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" iface="eth0" netns="" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.543 [INFO][6413] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.543 [INFO][6413] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.569 [INFO][6419] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.569 [INFO][6419] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.569 [INFO][6419] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.576 [WARNING][6419] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.576 [INFO][6419] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" HandleID="k8s-pod-network.d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0" Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.579 [INFO][6419] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.581416 containerd[1673]: 2025-01-30 13:53:03.580 [INFO][6413] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6" Jan 30 13:53:03.582302 containerd[1673]: time="2025-01-30T13:53:03.581454693Z" level=info msg="TearDown network for sandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" successfully" Jan 30 13:53:03.596696 containerd[1673]: time="2025-01-30T13:53:03.596518053Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:03.596900 containerd[1673]: time="2025-01-30T13:53:03.596746959Z" level=info msg="RemovePodSandbox \"d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6\" returns successfully" Jan 30 13:53:03.597505 containerd[1673]: time="2025-01-30T13:53:03.597470476Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.633 [WARNING][6437] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c", Pod:"calico-apiserver-5df8c6b8fc-2m7hp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ad5a2b51c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.633 [INFO][6437] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.633 [INFO][6437] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" iface="eth0" netns="" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.633 [INFO][6437] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.633 [INFO][6437] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.655 [INFO][6443] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.655 [INFO][6443] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.655 [INFO][6443] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.660 [WARNING][6443] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.661 [INFO][6443] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.662 [INFO][6443] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.664435 containerd[1673]: 2025-01-30 13:53:03.663 [INFO][6437] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.665722 containerd[1673]: time="2025-01-30T13:53:03.664493278Z" level=info msg="TearDown network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" successfully" Jan 30 13:53:03.665722 containerd[1673]: time="2025-01-30T13:53:03.664525179Z" level=info msg="StopPodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" returns successfully" Jan 30 13:53:03.665722 containerd[1673]: time="2025-01-30T13:53:03.665146294Z" level=info msg="RemovePodSandbox for \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:53:03.665722 containerd[1673]: time="2025-01-30T13:53:03.665257097Z" level=info msg="Forcibly stopping sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\"" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.699 [WARNING][6462] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0", GenerateName:"calico-apiserver-5df8c6b8fc-", Namespace:"calico-apiserver", SelfLink:"", UID:"7b2d705e-4b26-45d5-a9b8-0a56f69ef4b1", ResourceVersion:"1063", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5df8c6b8fc", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9f3a6b76798480f5d34f4c24536d7a10540024a27500fffa341089818f7e3c9c", Pod:"calico-apiserver-5df8c6b8fc-2m7hp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.50.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali79ad5a2b51c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.699 [INFO][6462] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.699 [INFO][6462] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" iface="eth0" netns="" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.699 [INFO][6462] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.699 [INFO][6462] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.719 [INFO][6468] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.720 [INFO][6468] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.720 [INFO][6468] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.726 [WARNING][6468] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.727 [INFO][6468] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" HandleID="k8s-pod-network.599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--2m7hp-eth0" Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.728 [INFO][6468] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.730783 containerd[1673]: 2025-01-30 13:53:03.729 [INFO][6462] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324" Jan 30 13:53:03.730783 containerd[1673]: time="2025-01-30T13:53:03.730737962Z" level=info msg="TearDown network for sandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" successfully" Jan 30 13:53:03.738963 containerd[1673]: time="2025-01-30T13:53:03.738906657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:03.739135 containerd[1673]: time="2025-01-30T13:53:03.739039260Z" level=info msg="RemovePodSandbox \"599624b84331f0a2bd9381a088ef9ebae8c44c691b99895cf082dd3b15ef0324\" returns successfully" Jan 30 13:53:03.739723 containerd[1673]: time="2025-01-30T13:53:03.739688476Z" level=info msg="StopPodSandbox for \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\"" Jan 30 13:53:03.739837 containerd[1673]: time="2025-01-30T13:53:03.739794278Z" level=info msg="TearDown network for sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" successfully" Jan 30 13:53:03.739837 containerd[1673]: time="2025-01-30T13:53:03.739811179Z" level=info msg="StopPodSandbox for \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" returns successfully" Jan 30 13:53:03.740322 containerd[1673]: time="2025-01-30T13:53:03.740258689Z" level=info msg="RemovePodSandbox for \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\"" Jan 30 13:53:03.740322 containerd[1673]: time="2025-01-30T13:53:03.740291890Z" level=info msg="Forcibly stopping sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\"" Jan 30 13:53:03.740453 containerd[1673]: time="2025-01-30T13:53:03.740355392Z" level=info msg="TearDown network for sandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" successfully" Jan 30 13:53:03.746058 containerd[1673]: time="2025-01-30T13:53:03.745995127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
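The repeated WARNING in the teardown entries above ("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP.") is a guard against stale deletes: the datastore endpoint now records a newer sandbox (2e4d4980... for calico-apiserver-5df8c6b8fc-vtx2w, 9f3a6b76... for -2m7hp), so a DEL arriving for the old sandbox (d890a474..., 599624b8...) must release only its own state and leave the endpoint in place. The compare-before-delete pattern, sketched with values taken from the log:

```go
package main

import "fmt"

// workloadEndpoint is a minimal stand-in for the v3.WorkloadEndpoint
// records dumped in the entries above.
type workloadEndpoint struct {
	Name        string
	ContainerID string
}

// maybeDeleteWEP deletes the endpoint only when the DEL request's container
// ID still matches the one recorded in the datastore; otherwise the pod was
// recreated and the endpoint belongs to the newer sandbox, so the stale DEL
// must not remove it.
func maybeDeleteWEP(wep *workloadEndpoint, cniContainerID string) bool {
	if wep.ContainerID != cniContainerID {
		fmt.Printf("CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP: %s\n", wep.Name)
		return false
	}
	// Matching IDs: safe to delete the endpoint from the datastore here.
	return true
}

func main() {
	wep := &workloadEndpoint{
		Name:        "ci--4081.3.0--a--38674a3e2a-k8s-calico--apiserver--5df8c6b8fc--vtx2w-eth0",
		ContainerID: "2e4d4980ce3c0fb4612b718c06e77b9491474c759ac6ec7faacbc6fd835edecd",
	}
	// Stale DEL for the old sandbox: guard refuses to delete the endpoint.
	fmt.Println(maybeDeleteWEP(wep, "d890a474e960a3c8aa76296a0301f4381c56e31fcfa11626e016d90ce559edd6"))
}
```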
Jan 30 13:53:03.746298 containerd[1673]: time="2025-01-30T13:53:03.746083829Z" level=info msg="RemovePodSandbox \"d40182059908ea875ff316c1db0e983f822731fb9a6a22c0fe9d37b21f1e6a8b\" returns successfully" Jan 30 13:53:03.746539 containerd[1673]: time="2025-01-30T13:53:03.746506639Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.825 [WARNING][6486] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.826 [INFO][6486] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.826 [INFO][6486] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" iface="eth0" netns="" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.826 [INFO][6486] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.826 [INFO][6486] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.889 [INFO][6495] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.890 [INFO][6495] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.890 [INFO][6495] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.898 [WARNING][6495] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.898 [INFO][6495] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.904 [INFO][6495] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.907079 containerd[1673]: 2025-01-30 13:53:03.906 [INFO][6486] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.907718 containerd[1673]: time="2025-01-30T13:53:03.907148979Z" level=info msg="TearDown network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" successfully" Jan 30 13:53:03.907718 containerd[1673]: time="2025-01-30T13:53:03.907184880Z" level=info msg="StopPodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" returns successfully" Jan 30 13:53:03.907830 containerd[1673]: time="2025-01-30T13:53:03.907774394Z" level=info msg="RemovePodSandbox for \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:53:03.907830 containerd[1673]: time="2025-01-30T13:53:03.907811995Z" level=info msg="Forcibly stopping sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\"" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.942 [WARNING][6513] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" WorkloadEndpoint="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.942 [INFO][6513] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.942 [INFO][6513] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" iface="eth0" netns="" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.943 [INFO][6513] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.943 [INFO][6513] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.962 [INFO][6519] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.963 [INFO][6519] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.963 [INFO][6519] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.970 [WARNING][6519] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.970 [INFO][6519] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" HandleID="k8s-pod-network.fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Workload="ci--4081.3.0--a--38674a3e2a-k8s-calico--kube--controllers--659c567d5c--h72ss-eth0" Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.972 [INFO][6519] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:03.974031 containerd[1673]: 2025-01-30 13:53:03.973 [INFO][6513] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca" Jan 30 13:53:03.974642 containerd[1673]: time="2025-01-30T13:53:03.974092179Z" level=info msg="TearDown network for sandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" successfully" Jan 30 13:53:03.980731 containerd[1673]: time="2025-01-30T13:53:03.980687837Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:03.980945 containerd[1673]: time="2025-01-30T13:53:03.980808140Z" level=info msg="RemovePodSandbox \"fb434c913dc88a71f1e754a759b2b7ce18b6a56a00f0b67cec744ace602ceaca\" returns successfully" Jan 30 13:53:03.981879 containerd[1673]: time="2025-01-30T13:53:03.981511656Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.029 [WARNING][6537] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1654d24a-276c-4733-ab3c-b2a324f91922", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee", Pod:"csi-node-driver-mcmv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieafc116ac8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.030 [INFO][6537] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.030 [INFO][6537] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" iface="eth0" netns="" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.030 [INFO][6537] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.030 [INFO][6537] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.051 [INFO][6543] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.051 [INFO][6543] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.051 [INFO][6543] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.056 [WARNING][6543] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.056 [INFO][6543] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.058 [INFO][6543] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.060111 containerd[1673]: 2025-01-30 13:53:04.059 [INFO][6537] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.060834 containerd[1673]: time="2025-01-30T13:53:04.060676949Z" level=info msg="TearDown network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" successfully" Jan 30 13:53:04.060834 containerd[1673]: time="2025-01-30T13:53:04.060708150Z" level=info msg="StopPodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" returns successfully" Jan 30 13:53:04.061361 containerd[1673]: time="2025-01-30T13:53:04.061314064Z" level=info msg="RemovePodSandbox for \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:53:04.061361 containerd[1673]: time="2025-01-30T13:53:04.061356465Z" level=info msg="Forcibly stopping sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\"" Jan 30 13:53:04.123343 systemd[1]: Started sshd@7-10.200.8.14:22-10.200.16.10:40382.service - OpenSSH per-connection server daemon (10.200.16.10:40382). Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.100 [WARNING][6561] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"1654d24a-276c-4733-ab3c-b2a324f91922", ResourceVersion:"1092", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"9a7b855f629ec62f86000ab63f13832349407ab010e50f3c38b17f91c34741ee", Pod:"csi-node-driver-mcmv9", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.50.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calieafc116ac8c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.100 [INFO][6561] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.100 [INFO][6561] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" iface="eth0" netns="" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.100 [INFO][6561] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.100 [INFO][6561] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.133 [INFO][6567] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.134 [INFO][6567] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.134 [INFO][6567] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.141 [WARNING][6567] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.141 [INFO][6567] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" HandleID="k8s-pod-network.4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Workload="ci--4081.3.0--a--38674a3e2a-k8s-csi--node--driver--mcmv9-eth0" Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.143 [INFO][6567] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.145368 containerd[1673]: 2025-01-30 13:53:04.144 [INFO][6561] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b" Jan 30 13:53:04.146056 containerd[1673]: time="2025-01-30T13:53:04.145430175Z" level=info msg="TearDown network for sandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" successfully" Jan 30 13:53:04.152368 containerd[1673]: time="2025-01-30T13:53:04.152298039Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:04.152512 containerd[1673]: time="2025-01-30T13:53:04.152393641Z" level=info msg="RemovePodSandbox \"4beb4e07197abf1fd106b699d1ed0af63679a8c6b7513499be6b8a2469fc913b\" returns successfully" Jan 30 13:53:04.153072 containerd[1673]: time="2025-01-30T13:53:04.153022756Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.188 [WARNING][6588] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c2a0430-bfea-48a8-b9a0-8ea183a3114a", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1", Pod:"coredns-6f6b679f8f-z8997", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1b12102759", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.189 [INFO][6588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.189 [INFO][6588] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" iface="eth0" netns="" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.189 [INFO][6588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.189 [INFO][6588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.209 [INFO][6594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.210 [INFO][6594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.210 [INFO][6594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.215 [WARNING][6594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.215 [INFO][6594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.217 [INFO][6594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.219216 containerd[1673]: 2025-01-30 13:53:04.218 [INFO][6588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.220199 containerd[1673]: time="2025-01-30T13:53:04.219265640Z" level=info msg="TearDown network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" successfully" Jan 30 13:53:04.220199 containerd[1673]: time="2025-01-30T13:53:04.219298041Z" level=info msg="StopPodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" returns successfully" Jan 30 13:53:04.220382 containerd[1673]: time="2025-01-30T13:53:04.220349166Z" level=info msg="RemovePodSandbox for \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:53:04.220440 containerd[1673]: time="2025-01-30T13:53:04.220419867Z" level=info msg="Forcibly stopping sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\"" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.254 [WARNING][6613] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"7c2a0430-bfea-48a8-b9a0-8ea183a3114a", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"b873de482b66c84e03af01d3fa3be13397df91f16ba835354f716d10e76633d1", Pod:"coredns-6f6b679f8f-z8997", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie1b12102759", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.254 [INFO][6613] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.254 [INFO][6613] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" iface="eth0" netns="" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.254 [INFO][6613] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.254 [INFO][6613] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.273 [INFO][6619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.273 [INFO][6619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.273 [INFO][6619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.280 [WARNING][6619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.280 [INFO][6619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" HandleID="k8s-pod-network.da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--z8997-eth0" Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.282 [INFO][6619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.284289 containerd[1673]: 2025-01-30 13:53:04.283 [INFO][6613] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3" Jan 30 13:53:04.284289 containerd[1673]: time="2025-01-30T13:53:04.284248193Z" level=info msg="TearDown network for sandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" successfully" Jan 30 13:53:04.290716 containerd[1673]: time="2025-01-30T13:53:04.290675047Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:04.290867 containerd[1673]: time="2025-01-30T13:53:04.290753549Z" level=info msg="RemovePodSandbox \"da565e48f0d91e5797ef69ee4653440f529e7550c8d6ba35b37ce11573bed7f3\" returns successfully" Jan 30 13:53:04.291422 containerd[1673]: time="2025-01-30T13:53:04.291389364Z" level=info msg="StopPodSandbox for \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\"" Jan 30 13:53:04.291531 containerd[1673]: time="2025-01-30T13:53:04.291499666Z" level=info msg="TearDown network for sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" successfully" Jan 30 13:53:04.291531 containerd[1673]: time="2025-01-30T13:53:04.291517867Z" level=info msg="StopPodSandbox for \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" returns successfully" Jan 30 13:53:04.291949 containerd[1673]: time="2025-01-30T13:53:04.291921877Z" level=info msg="RemovePodSandbox for \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\"" Jan 30 13:53:04.292063 containerd[1673]: time="2025-01-30T13:53:04.292034379Z" level=info msg="Forcibly stopping sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\"" Jan 30 13:53:04.292139 containerd[1673]: time="2025-01-30T13:53:04.292116481Z" level=info msg="TearDown network for sandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" successfully" Jan 30 13:53:04.298164 containerd[1673]: time="2025-01-30T13:53:04.298125625Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 30 13:53:04.298238 containerd[1673]: time="2025-01-30T13:53:04.298190126Z" level=info msg="RemovePodSandbox \"4874066ede25fe2962b536a8b1821396a846992e699290aaebe44323da44f12e\" returns successfully" Jan 30 13:53:04.298674 containerd[1673]: time="2025-01-30T13:53:04.298643037Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.337 [WARNING][6637] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7fbcb71-4682-4a9b-9734-5a668b2754b3", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31", Pod:"coredns-6f6b679f8f-4g4nh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aba16c7e1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.337 [INFO][6637] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.337 [INFO][6637] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" iface="eth0" netns="" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.337 [INFO][6637] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.337 [INFO][6637] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.362 [INFO][6643] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.362 [INFO][6643] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.363 [INFO][6643] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.368 [WARNING][6643] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.368 [INFO][6643] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.371 [INFO][6643] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.373172 containerd[1673]: 2025-01-30 13:53:04.372 [INFO][6637] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.373172 containerd[1673]: time="2025-01-30T13:53:04.373149918Z" level=info msg="TearDown network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" successfully" Jan 30 13:53:04.373172 containerd[1673]: time="2025-01-30T13:53:04.373184019Z" level=info msg="StopPodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" returns successfully" Jan 30 13:53:04.374329 containerd[1673]: time="2025-01-30T13:53:04.373746532Z" level=info msg="RemovePodSandbox for \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:53:04.374329 containerd[1673]: time="2025-01-30T13:53:04.373786133Z" level=info msg="Forcibly stopping sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\"" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.407 [WARNING][6661] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0", GenerateName:"coredns-6f6b679f8f-", Namespace:"kube-system", SelfLink:"", UID:"f7fbcb71-4682-4a9b-9734-5a668b2754b3", ResourceVersion:"1101", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 51, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"6f6b679f8f", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-38674a3e2a", ContainerID:"03237c2bdc0b1e8750b8bd9cc855b4bef3d9fb7b26d2a23c8ca99450c963ce31", Pod:"coredns-6f6b679f8f-4g4nh", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.50.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali0aba16c7e1d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.407 [INFO][6661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.407 [INFO][6661] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" iface="eth0" netns="" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.408 [INFO][6661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.408 [INFO][6661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.427 [INFO][6667] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.427 [INFO][6667] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.427 [INFO][6667] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.434 [WARNING][6667] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.434 [INFO][6667] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" HandleID="k8s-pod-network.ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Workload="ci--4081.3.0--a--38674a3e2a-k8s-coredns--6f6b679f8f--4g4nh-eth0" Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.436 [INFO][6667] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:53:04.438238 containerd[1673]: 2025-01-30 13:53:04.437 [INFO][6661] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934" Jan 30 13:53:04.439334 containerd[1673]: time="2025-01-30T13:53:04.438296276Z" level=info msg="TearDown network for sandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" successfully" Jan 30 13:53:04.797825 sshd[6572]: Accepted publickey for core from 10.200.16.10 port 40382 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:04.799491 sshd[6572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:04.805383 systemd-logind[1646]: New session 10 of user core. Jan 30 13:53:04.810200 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:53:04.886724 containerd[1673]: time="2025-01-30T13:53:04.886675294Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:53:04.886908 containerd[1673]: time="2025-01-30T13:53:04.886787096Z" level=info msg="RemovePodSandbox \"ca2e6e3b8caf09a5efa549e0c2d078873395dc181029b25abdc1be95ddcaf934\" returns successfully" Jan 30 13:53:05.339511 sshd[6572]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:05.343423 systemd[1]: sshd@7-10.200.8.14:22-10.200.16.10:40382.service: Deactivated successfully. Jan 30 13:53:05.346150 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:53:05.348046 systemd-logind[1646]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:53:05.349202 systemd-logind[1646]: Removed session 10. Jan 30 13:53:10.466363 systemd[1]: Started sshd@8-10.200.8.14:22-10.200.16.10:49336.service - OpenSSH per-connection server daemon (10.200.16.10:49336). Jan 30 13:53:11.136577 sshd[6689]: Accepted publickey for core from 10.200.16.10 port 49336 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:11.138304 sshd[6689]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:11.143352 systemd-logind[1646]: New session 11 of user core. Jan 30 13:53:11.148175 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:53:11.671697 sshd[6689]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:11.675342 systemd[1]: sshd@8-10.200.8.14:22-10.200.16.10:49336.service: Deactivated successfully. 
Jan 30 13:53:11.677898 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:53:11.679591 systemd-logind[1646]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:53:11.681142 systemd-logind[1646]: Removed session 11. Jan 30 13:53:16.795331 systemd[1]: Started sshd@9-10.200.8.14:22-10.200.16.10:51272.service - OpenSSH per-connection server daemon (10.200.16.10:51272). Jan 30 13:53:17.465017 sshd[6725]: Accepted publickey for core from 10.200.16.10 port 51272 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:17.466704 sshd[6725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:17.470716 systemd-logind[1646]: New session 12 of user core. Jan 30 13:53:17.476173 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:53:17.997896 sshd[6725]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:18.001706 systemd[1]: sshd@9-10.200.8.14:22-10.200.16.10:51272.service: Deactivated successfully. Jan 30 13:53:18.004074 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:53:18.005916 systemd-logind[1646]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:53:18.006881 systemd-logind[1646]: Removed session 12. Jan 30 13:53:18.120357 systemd[1]: Started sshd@10-10.200.8.14:22-10.200.16.10:51276.service - OpenSSH per-connection server daemon (10.200.16.10:51276). Jan 30 13:53:18.787477 sshd[6739]: Accepted publickey for core from 10.200.16.10 port 51276 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:18.789131 sshd[6739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:18.793284 systemd-logind[1646]: New session 13 of user core. Jan 30 13:53:18.799186 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:53:19.354192 sshd[6739]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:19.358430 systemd[1]: sshd@10-10.200.8.14:22-10.200.16.10:51276.service: Deactivated successfully. Jan 30 13:53:19.361326 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:53:19.362907 systemd-logind[1646]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:53:19.364304 systemd-logind[1646]: Removed session 13. Jan 30 13:53:19.477367 systemd[1]: Started sshd@11-10.200.8.14:22-10.200.16.10:51286.service - OpenSSH per-connection server daemon (10.200.16.10:51286). Jan 30 13:53:20.144462 sshd[6749]: Accepted publickey for core from 10.200.16.10 port 51286 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:20.146459 sshd[6749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:20.151380 systemd-logind[1646]: New session 14 of user core. Jan 30 13:53:20.156186 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:53:20.682428 sshd[6749]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:20.686250 systemd[1]: sshd@11-10.200.8.14:22-10.200.16.10:51286.service: Deactivated successfully. Jan 30 13:53:20.688571 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:53:20.690147 systemd-logind[1646]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:53:20.691352 systemd-logind[1646]: Removed session 14. Jan 30 13:53:25.811592 systemd[1]: Started sshd@12-10.200.8.14:22-10.200.16.10:51294.service - OpenSSH per-connection server daemon (10.200.16.10:51294). 
Jan 30 13:53:26.477494 sshd[6765]: Accepted publickey for core from 10.200.16.10 port 51294 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:26.479618 sshd[6765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:26.486355 systemd-logind[1646]: New session 15 of user core. Jan 30 13:53:26.489208 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:53:27.015371 sshd[6765]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:27.018443 systemd[1]: sshd@12-10.200.8.14:22-10.200.16.10:51294.service: Deactivated successfully. Jan 30 13:53:27.020922 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:53:27.022860 systemd-logind[1646]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:53:27.024096 systemd-logind[1646]: Removed session 15. Jan 30 13:53:32.135850 systemd[1]: Started sshd@13-10.200.8.14:22-10.200.16.10:41064.service - OpenSSH per-connection server daemon (10.200.16.10:41064). Jan 30 13:53:32.812061 sshd[6818]: Accepted publickey for core from 10.200.16.10 port 41064 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:32.813890 sshd[6818]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:32.819293 systemd-logind[1646]: New session 16 of user core. Jan 30 13:53:32.824226 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:53:33.348663 sshd[6818]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:33.353461 systemd[1]: sshd@13-10.200.8.14:22-10.200.16.10:41064.service: Deactivated successfully. Jan 30 13:53:33.355895 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:53:33.356666 systemd-logind[1646]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:53:33.357731 systemd-logind[1646]: Removed session 16. Jan 30 13:53:38.473349 systemd[1]: Started sshd@14-10.200.8.14:22-10.200.16.10:55806.service - OpenSSH per-connection server daemon (10.200.16.10:55806). Jan 30 13:53:39.147337 sshd[6831]: Accepted publickey for core from 10.200.16.10 port 55806 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:39.149732 sshd[6831]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:39.156405 systemd-logind[1646]: New session 17 of user core. Jan 30 13:53:39.162235 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:53:39.683127 sshd[6831]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:39.686405 systemd[1]: sshd@14-10.200.8.14:22-10.200.16.10:55806.service: Deactivated successfully. Jan 30 13:53:39.688830 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:53:39.690635 systemd-logind[1646]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:53:39.691849 systemd-logind[1646]: Removed session 17. Jan 30 13:53:39.806327 systemd[1]: Started sshd@15-10.200.8.14:22-10.200.16.10:55818.service - OpenSSH per-connection server daemon (10.200.16.10:55818). Jan 30 13:53:40.475186 sshd[6846]: Accepted publickey for core from 10.200.16.10 port 55818 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:40.476816 sshd[6846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:40.480956 systemd-logind[1646]: New session 18 of user core. Jan 30 13:53:40.485163 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 30 13:53:41.073916 sshd[6846]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:41.077913 systemd[1]: sshd@15-10.200.8.14:22-10.200.16.10:55818.service: Deactivated successfully. Jan 30 13:53:41.082232 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:53:41.084785 systemd-logind[1646]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:53:41.085818 systemd-logind[1646]: Removed session 18. Jan 30 13:53:41.200354 systemd[1]: Started sshd@16-10.200.8.14:22-10.200.16.10:55834.service - OpenSSH per-connection server daemon (10.200.16.10:55834). Jan 30 13:53:41.870615 sshd[6857]: Accepted publickey for core from 10.200.16.10 port 55834 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:41.872304 sshd[6857]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:41.877380 systemd-logind[1646]: New session 19 of user core. Jan 30 13:53:41.886197 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:53:44.248960 sshd[6857]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:44.252649 systemd[1]: sshd@16-10.200.8.14:22-10.200.16.10:55834.service: Deactivated successfully. Jan 30 13:53:44.254839 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:53:44.256859 systemd-logind[1646]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:53:44.258110 systemd-logind[1646]: Removed session 19. Jan 30 13:53:44.372371 systemd[1]: Started sshd@17-10.200.8.14:22-10.200.16.10:55838.service - OpenSSH per-connection server daemon (10.200.16.10:55838). Jan 30 13:53:45.041788 sshd[6881]: Accepted publickey for core from 10.200.16.10 port 55838 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:45.043411 sshd[6881]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:45.048162 systemd-logind[1646]: New session 20 of user core. Jan 30 13:53:45.052176 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:53:45.686271 sshd[6881]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:45.689387 systemd[1]: sshd@17-10.200.8.14:22-10.200.16.10:55838.service: Deactivated successfully. Jan 30 13:53:45.691939 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:53:45.693880 systemd-logind[1646]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:53:45.695099 systemd-logind[1646]: Removed session 20. Jan 30 13:53:45.810311 systemd[1]: Started sshd@18-10.200.8.14:22-10.200.16.10:55840.service - OpenSSH per-connection server daemon (10.200.16.10:55840). Jan 30 13:53:46.140305 systemd[1]: run-containerd-runc-k8s.io-ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c-runc.iElyHm.mount: Deactivated successfully. Jan 30 13:53:46.494538 sshd[6892]: Accepted publickey for core from 10.200.16.10 port 55840 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:46.496444 sshd[6892]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:46.501496 systemd-logind[1646]: New session 21 of user core. Jan 30 13:53:46.505177 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:53:47.032666 sshd[6892]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:47.036857 systemd[1]: sshd@18-10.200.8.14:22-10.200.16.10:55840.service: Deactivated successfully. Jan 30 13:53:47.039144 systemd[1]: session-21.scope: Deactivated successfully. 
Jan 30 13:53:47.040959 systemd-logind[1646]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:53:47.042167 systemd-logind[1646]: Removed session 21. Jan 30 13:53:52.152341 systemd[1]: Started sshd@19-10.200.8.14:22-10.200.16.10:57324.service - OpenSSH per-connection server daemon (10.200.16.10:57324). Jan 30 13:53:52.821538 sshd[6928]: Accepted publickey for core from 10.200.16.10 port 57324 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:52.823141 sshd[6928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:52.827921 systemd-logind[1646]: New session 22 of user core. Jan 30 13:53:52.834171 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:53:53.404539 sshd[6928]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:53.409328 systemd[1]: sshd@19-10.200.8.14:22-10.200.16.10:57324.service: Deactivated successfully. Jan 30 13:53:53.411904 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:53:53.412811 systemd-logind[1646]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:53:53.414074 systemd-logind[1646]: Removed session 22. Jan 30 13:53:58.541351 systemd[1]: Started sshd@20-10.200.8.14:22-10.200.16.10:44558.service - OpenSSH per-connection server daemon (10.200.16.10:44558). Jan 30 13:53:59.228086 sshd[6953]: Accepted publickey for core from 10.200.16.10 port 44558 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:53:59.230205 sshd[6953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:53:59.237554 systemd-logind[1646]: New session 23 of user core. Jan 30 13:53:59.245169 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:53:59.780505 sshd[6953]: pam_unix(sshd:session): session closed for user core Jan 30 13:53:59.784438 systemd[1]: sshd@20-10.200.8.14:22-10.200.16.10:44558.service: Deactivated successfully. Jan 30 13:53:59.786698 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:53:59.787618 systemd-logind[1646]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:53:59.788855 systemd-logind[1646]: Removed session 23. Jan 30 13:54:04.895610 systemd[1]: Started sshd@21-10.200.8.14:22-10.200.16.10:44562.service - OpenSSH per-connection server daemon (10.200.16.10:44562). Jan 30 13:54:05.568923 sshd[6992]: Accepted publickey for core from 10.200.16.10 port 44562 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:54:05.570876 sshd[6992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:54:05.576667 systemd-logind[1646]: New session 24 of user core. Jan 30 13:54:05.584178 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:54:06.100273 sshd[6992]: pam_unix(sshd:session): session closed for user core Jan 30 13:54:06.104371 systemd[1]: sshd@21-10.200.8.14:22-10.200.16.10:44562.service: Deactivated successfully. Jan 30 13:54:06.106516 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:54:06.107469 systemd-logind[1646]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:54:06.108865 systemd-logind[1646]: Removed session 24. Jan 30 13:54:11.229341 systemd[1]: Started sshd@22-10.200.8.14:22-10.200.16.10:58694.service - OpenSSH per-connection server daemon (10.200.16.10:58694). 
Jan 30 13:54:11.912318 sshd[7009]: Accepted publickey for core from 10.200.16.10 port 58694 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc
Jan 30 13:54:11.913865 sshd[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:11.918058 systemd-logind[1646]: New session 25 of user core.
Jan 30 13:54:11.922185 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 30 13:54:12.455494 sshd[7009]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:12.459079 systemd[1]: sshd@22-10.200.8.14:22-10.200.16.10:58694.service: Deactivated successfully.
Jan 30 13:54:12.462305 systemd[1]: session-25.scope: Deactivated successfully.
Jan 30 13:54:12.464323 systemd-logind[1646]: Session 25 logged out. Waiting for processes to exit.
Jan 30 13:54:12.465558 systemd-logind[1646]: Removed session 25.
Jan 30 13:54:16.142057 systemd[1]: run-containerd-runc-k8s.io-ec7fce7e5c53058b3c0014d3dc3e9e268d1b6b120d70a3a693403c9d6f6c352c-runc.knTfmv.mount: Deactivated successfully.
Jan 30 13:54:17.577346 systemd[1]: Started sshd@23-10.200.8.14:22-10.200.16.10:44308.service - OpenSSH per-connection server daemon (10.200.16.10:44308).
Jan 30 13:54:18.244474 sshd[7044]: Accepted publickey for core from 10.200.16.10 port 44308 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc
Jan 30 13:54:18.246392 sshd[7044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:18.251709 systemd-logind[1646]: New session 26 of user core.
Jan 30 13:54:18.255195 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 30 13:54:18.782694 sshd[7044]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:18.785935 systemd[1]: sshd@23-10.200.8.14:22-10.200.16.10:44308.service: Deactivated successfully.
Jan 30 13:54:18.788315 systemd[1]: session-26.scope: Deactivated successfully.
Jan 30 13:54:18.789829 systemd-logind[1646]: Session 26 logged out. Waiting for processes to exit.
Jan 30 13:54:18.791252 systemd-logind[1646]: Removed session 26.
Jan 30 13:54:23.908318 systemd[1]: Started sshd@24-10.200.8.14:22-10.200.16.10:44310.service - OpenSSH per-connection server daemon (10.200.16.10:44310).
Jan 30 13:54:24.576034 sshd[7058]: Accepted publickey for core from 10.200.16.10 port 44310 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc
Jan 30 13:54:24.577885 sshd[7058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:24.582060 systemd-logind[1646]: New session 27 of user core.
Jan 30 13:54:24.587174 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 30 13:54:25.106865 sshd[7058]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:25.110110 systemd[1]: sshd@24-10.200.8.14:22-10.200.16.10:44310.service: Deactivated successfully.
Jan 30 13:54:25.112727 systemd[1]: session-27.scope: Deactivated successfully.
Jan 30 13:54:25.114275 systemd-logind[1646]: Session 27 logged out. Waiting for processes to exit.
Jan 30 13:54:25.115449 systemd-logind[1646]: Removed session 27.
Jan 30 13:54:30.242011 systemd[1]: Started sshd@25-10.200.8.14:22-10.200.16.10:48940.service - OpenSSH per-connection server daemon (10.200.16.10:48940).
Jan 30 13:54:30.922155 sshd[7103]: Accepted publickey for core from 10.200.16.10 port 48940 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc
Jan 30 13:54:30.923788 sshd[7103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:30.928190 systemd-logind[1646]: New session 28 of user core.
Jan 30 13:54:30.935198 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 30 13:54:31.455566 sshd[7103]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:31.459755 systemd[1]: sshd@25-10.200.8.14:22-10.200.16.10:48940.service: Deactivated successfully.
Jan 30 13:54:31.462082 systemd[1]: session-28.scope: Deactivated successfully.
Jan 30 13:54:31.462923 systemd-logind[1646]: Session 28 logged out. Waiting for processes to exit.
Jan 30 13:54:31.464099 systemd-logind[1646]: Removed session 28.
Jan 30 13:54:36.583341 systemd[1]: Started sshd@26-10.200.8.14:22-10.200.16.10:44578.service - OpenSSH per-connection server daemon (10.200.16.10:44578).
Jan 30 13:54:37.251116 sshd[7122]: Accepted publickey for core from 10.200.16.10 port 44578 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc
Jan 30 13:54:37.252744 sshd[7122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 30 13:54:37.259636 systemd-logind[1646]: New session 29 of user core.
Jan 30 13:54:37.262210 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 30 13:54:37.783274 sshd[7122]: pam_unix(sshd:session): session closed for user core
Jan 30 13:54:37.788112 systemd[1]: sshd@26-10.200.8.14:22-10.200.16.10:44578.service: Deactivated successfully.
Jan 30 13:54:37.791182 systemd[1]: session-29.scope: Deactivated successfully.
Jan 30 13:54:37.792308 systemd-logind[1646]: Session 29 logged out. Waiting for processes to exit.
Jan 30 13:54:37.793409 systemd-logind[1646]: Removed session 29.