Jan 17 12:14:55.061720 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Jan 17 10:39:07 -00 2025
Jan 17 12:14:55.061760 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:14:55.061776 kernel: BIOS-provided physical RAM map:
Jan 17 12:14:55.061788 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 17 12:14:55.061798 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 17 12:14:55.061809 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 17 12:14:55.061823 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 17 12:14:55.061838 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 17 12:14:55.061849 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 17 12:14:55.061860 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 17 12:14:55.061871 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 17 12:14:55.061883 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 17 12:14:55.061894 kernel: printk: bootconsole [earlyser0] enabled
Jan 17 12:14:55.061907 kernel: NX (Execute Disable) protection: active
Jan 17 12:14:55.061925 kernel: APIC: Static calls initialized
Jan 17 12:14:55.061938 kernel: efi: EFI v2.7 by Microsoft
Jan 17 12:14:55.061951 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 17 12:14:55.061963 kernel: SMBIOS 3.1.0 present.
Jan 17 12:14:55.061976 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 17 12:14:55.061989 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 17 12:14:55.062002 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 17 12:14:55.062015 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 17 12:14:55.062027 kernel: Hyper-V: Nested features: 0x1e0101
Jan 17 12:14:55.062040 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 17 12:14:55.062057 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 17 12:14:55.062070 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 12:14:55.062083 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 17 12:14:55.062097 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 17 12:14:55.062110 kernel: tsc: Detected 2593.907 MHz processor
Jan 17 12:14:55.062122 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 17 12:14:55.062134 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 17 12:14:55.062147 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 17 12:14:55.062160 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 17 12:14:55.062176 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 17 12:14:55.062188 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 17 12:14:55.062200 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 17 12:14:55.062212 kernel: Using GB pages for direct mapping
Jan 17 12:14:55.062224 kernel: Secure boot disabled
Jan 17 12:14:55.062236 kernel: ACPI: Early table checksum verification disabled
Jan 17 12:14:55.062248 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 17 12:14:55.063205 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063218 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063230 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 17 12:14:55.063238 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 17 12:14:55.063248 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063271 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063282 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063293 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063303 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063311 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063320 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 17 12:14:55.063330 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 17 12:14:55.063337 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 17 12:14:55.063348 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 17 12:14:55.063355 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 17 12:14:55.063368 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 17 12:14:55.063376 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 17 12:14:55.063384 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 17 12:14:55.063394 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 17 12:14:55.063401 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 17 12:14:55.063412 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 17 12:14:55.063419 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 17 12:14:55.063429 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 17 12:14:55.063437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 17 12:14:55.063448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 17 12:14:55.063458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 17 12:14:55.063465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 17 12:14:55.063474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 17 12:14:55.063483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 17 12:14:55.063491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 17 12:14:55.063501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 17 12:14:55.063509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 17 12:14:55.063519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 17 12:14:55.063529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 17 12:14:55.063538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 17 12:14:55.063547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 17 12:14:55.063555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 17 12:14:55.063565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 17 12:14:55.063573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 17 12:14:55.063583 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 17 12:14:55.063598 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 17 12:14:55.063609 kernel: Zone ranges:
Jan 17 12:14:55.063622 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 17 12:14:55.063633 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 17 12:14:55.063642 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 12:14:55.063650 kernel: Movable zone start for each node
Jan 17 12:14:55.063660 kernel: Early memory node ranges
Jan 17 12:14:55.063668 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 17 12:14:55.063678 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 17 12:14:55.063686 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 17 12:14:55.063693 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 17 12:14:55.063706 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 17 12:14:55.063713 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 17 12:14:55.063723 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 17 12:14:55.063731 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 17 12:14:55.063739 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 17 12:14:55.063749 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 17 12:14:55.063757 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 17 12:14:55.063765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 17 12:14:55.063775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 17 12:14:55.063785 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 17 12:14:55.063795 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 17 12:14:55.063803 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 17 12:14:55.063814 kernel: Booting paravirtualized kernel on Hyper-V
Jan 17 12:14:55.063822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 17 12:14:55.063830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 17 12:14:55.063840 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 17 12:14:55.063847 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 17 12:14:55.063857 kernel: pcpu-alloc: [0] 0 1
Jan 17 12:14:55.063867 kernel: Hyper-V: PV spinlocks enabled
Jan 17 12:14:55.063878 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 17 12:14:55.063887 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:14:55.063896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 17 12:14:55.063905 kernel: random: crng init done
Jan 17 12:14:55.063912 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 17 12:14:55.063923 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 17 12:14:55.063930 kernel: Fallback order for Node 0: 0
Jan 17 12:14:55.063943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 17 12:14:55.063959 kernel: Policy zone: Normal
Jan 17 12:14:55.063971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 17 12:14:55.063981 kernel: software IO TLB: area num 2.
Jan 17 12:14:55.063996 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 310124K reserved, 0K cma-reserved)
Jan 17 12:14:55.064007 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 17 12:14:55.064018 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 17 12:14:55.064028 kernel: ftrace: allocated 149 pages with 4 groups
Jan 17 12:14:55.064038 kernel: Dynamic Preempt: voluntary
Jan 17 12:14:55.064048 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 17 12:14:55.064058 kernel: rcu: RCU event tracing is enabled.
Jan 17 12:14:55.064071 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 17 12:14:55.064080 kernel: Trampoline variant of Tasks RCU enabled.
Jan 17 12:14:55.064088 kernel: Rude variant of Tasks RCU enabled.
Jan 17 12:14:55.064096 kernel: Tracing variant of Tasks RCU enabled.
Jan 17 12:14:55.064104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 17 12:14:55.064114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 17 12:14:55.064122 kernel: Using NULL legacy PIC
Jan 17 12:14:55.064130 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 17 12:14:55.064138 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 17 12:14:55.064146 kernel: Console: colour dummy device 80x25
Jan 17 12:14:55.064154 kernel: printk: console [tty1] enabled
Jan 17 12:14:55.064162 kernel: printk: console [ttyS0] enabled
Jan 17 12:14:55.064169 kernel: printk: bootconsole [earlyser0] disabled
Jan 17 12:14:55.064178 kernel: ACPI: Core revision 20230628
Jan 17 12:14:55.064188 kernel: Failed to register legacy timer interrupt
Jan 17 12:14:55.064199 kernel: APIC: Switch to symmetric I/O mode setup
Jan 17 12:14:55.064209 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 17 12:14:55.064217 kernel: Hyper-V: Using IPI hypercalls
Jan 17 12:14:55.064228 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 17 12:14:55.064236 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 17 12:14:55.064246 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 17 12:14:55.064262 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 17 12:14:55.064273 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 17 12:14:55.064281 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 17 12:14:55.064295 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
Jan 17 12:14:55.064303 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 17 12:14:55.064314 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 17 12:14:55.064322 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 17 12:14:55.064332 kernel: Spectre V2 : Mitigation: Retpolines
Jan 17 12:14:55.064341 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 17 12:14:55.064349 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 17 12:14:55.064357 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 17 12:14:55.064368 kernel: RETBleed: Vulnerable
Jan 17 12:14:55.064378 kernel: Speculative Store Bypass: Vulnerable
Jan 17 12:14:55.064389 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:14:55.064396 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 17 12:14:55.064407 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 17 12:14:55.064416 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 17 12:14:55.064424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 17 12:14:55.064435 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 17 12:14:55.064442 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 17 12:14:55.064454 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 17 12:14:55.064461 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 17 12:14:55.064478 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 17 12:14:55.064491 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 17 12:14:55.064501 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 17 12:14:55.064510 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 17 12:14:55.064521 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 17 12:14:55.064531 kernel: Freeing SMP alternatives memory: 32K
Jan 17 12:14:55.064541 kernel: pid_max: default: 32768 minimum: 301
Jan 17 12:14:55.064549 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 17 12:14:55.064559 kernel: landlock: Up and running.
Jan 17 12:14:55.064568 kernel: SELinux: Initializing.
Jan 17 12:14:55.064577 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:14:55.064587 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 17 12:14:55.064595 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 17 12:14:55.064608 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:14:55.064616 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:14:55.064627 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 17 12:14:55.064635 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 17 12:14:55.064645 kernel: signal: max sigframe size: 3632
Jan 17 12:14:55.064654 kernel: rcu: Hierarchical SRCU implementation.
Jan 17 12:14:55.064663 kernel: rcu: Max phase no-delay instances is 400.
Jan 17 12:14:55.064673 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 17 12:14:55.064681 kernel: smp: Bringing up secondary CPUs ...
Jan 17 12:14:55.064694 kernel: smpboot: x86: Booting SMP configuration:
Jan 17 12:14:55.064702 kernel: .... node #0, CPUs: #1
Jan 17 12:14:55.064714 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 17 12:14:55.064723 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 17 12:14:55.064734 kernel: smp: Brought up 1 node, 2 CPUs
Jan 17 12:14:55.064742 kernel: smpboot: Max logical packages: 1
Jan 17 12:14:55.064752 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 17 12:14:55.064761 kernel: devtmpfs: initialized
Jan 17 12:14:55.064773 kernel: x86/mm: Memory block size: 128MB
Jan 17 12:14:55.064782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 17 12:14:55.064791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 17 12:14:55.068654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 17 12:14:55.068672 kernel: pinctrl core: initialized pinctrl subsystem
Jan 17 12:14:55.068688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 17 12:14:55.068703 kernel: audit: initializing netlink subsys (disabled)
Jan 17 12:14:55.068718 kernel: audit: type=2000 audit(1737116093.027:1): state=initialized audit_enabled=0 res=1
Jan 17 12:14:55.068732 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 17 12:14:55.068753 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 17 12:14:55.068768 kernel: cpuidle: using governor menu
Jan 17 12:14:55.068783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 17 12:14:55.068798 kernel: dca service started, version 1.12.1
Jan 17 12:14:55.068812 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 17 12:14:55.068827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 17 12:14:55.068842 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 17 12:14:55.068857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 17 12:14:55.068872 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 17 12:14:55.068889 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 17 12:14:55.068904 kernel: ACPI: Added _OSI(Module Device)
Jan 17 12:14:55.068919 kernel: ACPI: Added _OSI(Processor Device)
Jan 17 12:14:55.068933 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 17 12:14:55.068948 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 17 12:14:55.068963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 17 12:14:55.068978 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 17 12:14:55.068993 kernel: ACPI: Interpreter enabled
Jan 17 12:14:55.069007 kernel: ACPI: PM: (supports S0 S5)
Jan 17 12:14:55.069025 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 17 12:14:55.069040 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 17 12:14:55.069054 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 17 12:14:55.069069 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 17 12:14:55.069083 kernel: iommu: Default domain type: Translated
Jan 17 12:14:55.069098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 17 12:14:55.069112 kernel: efivars: Registered efivars operations
Jan 17 12:14:55.069127 kernel: PCI: Using ACPI for IRQ routing
Jan 17 12:14:55.069141 kernel: PCI: System does not support PCI
Jan 17 12:14:55.069158 kernel: vgaarb: loaded
Jan 17 12:14:55.069173 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 17 12:14:55.069188 kernel: VFS: Disk quotas dquot_6.6.0
Jan 17 12:14:55.069202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 17 12:14:55.069217 kernel: pnp: PnP ACPI init
Jan 17 12:14:55.069231 kernel: pnp: PnP ACPI: found 3 devices
Jan 17 12:14:55.069246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 17 12:14:55.069270 kernel: NET: Registered PF_INET protocol family
Jan 17 12:14:55.069285 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 17 12:14:55.069304 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 17 12:14:55.069319 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 17 12:14:55.069334 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 17 12:14:55.069349 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 17 12:14:55.069363 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 17 12:14:55.069378 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:14:55.069392 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 17 12:14:55.069407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 17 12:14:55.069422 kernel: NET: Registered PF_XDP protocol family
Jan 17 12:14:55.069439 kernel: PCI: CLS 0 bytes, default 64
Jan 17 12:14:55.069454 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 17 12:14:55.069468 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 17 12:14:55.069483 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 17 12:14:55.069498 kernel: Initialise system trusted keyrings
Jan 17 12:14:55.069512 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 17 12:14:55.069527 kernel: Key type asymmetric registered
Jan 17 12:14:55.069541 kernel: Asymmetric key parser 'x509' registered
Jan 17 12:14:55.069555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 17 12:14:55.069573 kernel: io scheduler mq-deadline registered
Jan 17 12:14:55.069587 kernel: io scheduler kyber registered
Jan 17 12:14:55.069602 kernel: io scheduler bfq registered
Jan 17 12:14:55.069617 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 17 12:14:55.069631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 17 12:14:55.069646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 17 12:14:55.069661 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 17 12:14:55.069675 kernel: i8042: PNP: No PS/2 controller found.
Jan 17 12:14:55.069878 kernel: rtc_cmos 00:02: registered as rtc0
Jan 17 12:14:55.070027 kernel: rtc_cmos 00:02: setting system clock to 2025-01-17T12:14:54 UTC (1737116094)
Jan 17 12:14:55.070151 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 17 12:14:55.070170 kernel: intel_pstate: CPU model not supported
Jan 17 12:14:55.070184 kernel: efifb: probing for efifb
Jan 17 12:14:55.070198 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 17 12:14:55.070212 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 17 12:14:55.070225 kernel: efifb: scrolling: redraw
Jan 17 12:14:55.070243 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 17 12:14:55.070317 kernel: Console: switching to colour frame buffer device 128x48
Jan 17 12:14:55.070331 kernel: fb0: EFI VGA frame buffer device
Jan 17 12:14:55.070348 kernel: pstore: Using crash dump compression: deflate
Jan 17 12:14:55.070361 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 17 12:14:55.070375 kernel: NET: Registered PF_INET6 protocol family
Jan 17 12:14:55.070389 kernel: Segment Routing with IPv6
Jan 17 12:14:55.070403 kernel: In-situ OAM (IOAM) with IPv6
Jan 17 12:14:55.070418 kernel: NET: Registered PF_PACKET protocol family
Jan 17 12:14:55.070433 kernel: Key type dns_resolver registered
Jan 17 12:14:55.070453 kernel: IPI shorthand broadcast: enabled
Jan 17 12:14:55.070467 kernel: sched_clock: Marking stable (873002900, 42888900)->(1119679200, -203787400)
Jan 17 12:14:55.070481 kernel: registered taskstats version 1
Jan 17 12:14:55.070496 kernel: Loading compiled-in X.509 certificates
Jan 17 12:14:55.070510 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80'
Jan 17 12:14:55.070525 kernel: Key type .fscrypt registered
Jan 17 12:14:55.070538 kernel: Key type fscrypt-provisioning registered
Jan 17 12:14:55.070552 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 17 12:14:55.070569 kernel: ima: Allocated hash algorithm: sha1
Jan 17 12:14:55.070584 kernel: ima: No architecture policies found
Jan 17 12:14:55.070598 kernel: clk: Disabling unused clocks
Jan 17 12:14:55.070611 kernel: Freeing unused kernel image (initmem) memory: 42848K
Jan 17 12:14:55.070625 kernel: Write protecting the kernel read-only data: 36864k
Jan 17 12:14:55.070639 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 17 12:14:55.070653 kernel: Run /init as init process
Jan 17 12:14:55.070667 kernel: with arguments:
Jan 17 12:14:55.070681 kernel: /init
Jan 17 12:14:55.070697 kernel: with environment:
Jan 17 12:14:55.070711 kernel: HOME=/
Jan 17 12:14:55.070724 kernel: TERM=linux
Jan 17 12:14:55.070737 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 17 12:14:55.070754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 17 12:14:55.070771 systemd[1]: Detected virtualization microsoft.
Jan 17 12:14:55.070786 systemd[1]: Detected architecture x86-64.
Jan 17 12:14:55.070801 systemd[1]: Running in initrd.
Jan 17 12:14:55.070819 systemd[1]: No hostname configured, using default hostname.
Jan 17 12:14:55.070833 systemd[1]: Hostname set to <localhost>.
Jan 17 12:14:55.070848 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:14:55.070864 systemd[1]: Queued start job for default target initrd.target.
Jan 17 12:14:55.070878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 17 12:14:55.070893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 17 12:14:55.070909 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 17 12:14:55.070923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 17 12:14:55.070941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 17 12:14:55.070957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 17 12:14:55.070974 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 17 12:14:55.070990 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 17 12:14:55.071005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 17 12:14:55.071020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 17 12:14:55.071035 systemd[1]: Reached target paths.target - Path Units.
Jan 17 12:14:55.071054 systemd[1]: Reached target slices.target - Slice Units.
Jan 17 12:14:55.071070 systemd[1]: Reached target swap.target - Swaps.
Jan 17 12:14:55.071085 systemd[1]: Reached target timers.target - Timer Units.
Jan 17 12:14:55.071100 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 17 12:14:55.071116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 17 12:14:55.071132 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 17 12:14:55.071147 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 17 12:14:55.071163 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 17 12:14:55.071180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 17 12:14:55.071198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 17 12:14:55.071213 systemd[1]: Reached target sockets.target - Socket Units.
Jan 17 12:14:55.071229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 17 12:14:55.071245 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 17 12:14:55.071272 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 17 12:14:55.071288 systemd[1]: Starting systemd-fsck-usr.service...
Jan 17 12:14:55.071304 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 17 12:14:55.071319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 17 12:14:55.071337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:14:55.071379 systemd-journald[176]: Collecting audit messages is disabled.
Jan 17 12:14:55.071416 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 17 12:14:55.071431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 17 12:14:55.071453 systemd-journald[176]: Journal started
Jan 17 12:14:55.071502 systemd-journald[176]: Runtime Journal (/run/log/journal/8307dbddd4374dbc870e530011332acd) is 8.0M, max 158.8M, 150.8M free.
Jan 17 12:14:55.076356 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 17 12:14:55.077101 systemd[1]: Finished systemd-fsck-usr.service.
Jan 17 12:14:55.085541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 17 12:14:55.093840 systemd-modules-load[177]: Inserted module 'overlay'
Jan 17 12:14:55.098444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 17 12:14:55.106351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:14:55.115477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 17 12:14:55.131571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:14:55.139542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 17 12:14:55.144600 kernel: Bridge firewalling registered
Jan 17 12:14:55.143461 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 17 12:14:55.150490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 17 12:14:55.152480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 17 12:14:55.155457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 17 12:14:55.170560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 17 12:14:55.183890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 17 12:14:55.187495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 17 12:14:55.197610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 17 12:14:55.203157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:14:55.212448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 17 12:14:55.233272 dracut-cmdline[214]: dracut-dracut-053
Jan 17 12:14:55.237488 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e
Jan 17 12:14:55.263241 systemd-resolved[209]: Positive Trust Anchors:
Jan 17 12:14:55.263271 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 17 12:14:55.263327 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 17 12:14:55.288004 systemd-resolved[209]: Defaulting to hostname 'linux'.
Jan 17 12:14:55.289424 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 17 12:14:55.292232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 17 12:14:55.321277 kernel: SCSI subsystem initialized
Jan 17 12:14:55.331284 kernel: Loading iSCSI transport class v2.0-870.
Jan 17 12:14:55.342283 kernel: iscsi: registered transport (tcp)
Jan 17 12:14:55.363733 kernel: iscsi: registered transport (qla4xxx)
Jan 17 12:14:55.363828 kernel: QLogic iSCSI HBA Driver
Jan 17 12:14:55.399859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 17 12:14:55.407451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 17 12:14:55.435048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 17 12:14:55.435141 kernel: device-mapper: uevent: version 1.0.3
Jan 17 12:14:55.438263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 17 12:14:55.479292 kernel: raid6: avx512x4 gen() 18648 MB/s
Jan 17 12:14:55.498274 kernel: raid6: avx512x2 gen() 18637 MB/s
Jan 17 12:14:55.517269 kernel: raid6: avx512x1 gen() 18517 MB/s
Jan 17 12:14:55.536270 kernel: raid6: avx2x4 gen() 18361 MB/s
Jan 17 12:14:55.555270 kernel: raid6: avx2x2 gen() 18394 MB/s
Jan 17 12:14:55.574886 kernel: raid6: avx2x1 gen() 13740 MB/s
Jan 17 12:14:55.574918 kernel: raid6: using algorithm avx512x4 gen() 18648 MB/s
Jan 17 12:14:55.595869 kernel: raid6: .... xor() 7002 MB/s, rmw enabled
Jan 17 12:14:55.595904 kernel: raid6: using avx512x2 recovery algorithm
Jan 17 12:14:55.618281 kernel: xor: automatically using best checksumming function avx
Jan 17 12:14:55.765292 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 17 12:14:55.775356 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 17 12:14:55.787450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 17 12:14:55.800659 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 17 12:14:55.805156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 17 12:14:55.820528 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 17 12:14:55.837207 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation
Jan 17 12:14:55.867790 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 17 12:14:55.876615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 17 12:14:55.920767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 17 12:14:55.932461 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 17 12:14:55.960711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 17 12:14:55.971853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 17 12:14:55.975768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 17 12:14:55.982006 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 17 12:14:56.001497 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 17 12:14:56.022432 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 17 12:14:56.027072 kernel: cryptd: max_cpu_qlen set to 1000
Jan 17 12:14:56.045859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 17 12:14:56.054959 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 17 12:14:56.054989 kernel: AES CTR mode by8 optimization enabled
Jan 17 12:14:56.046098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:14:56.064337 kernel: hv_vmbus: Vmbus version:5.2
Jan 17 12:14:56.061013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:14:56.066985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:14:56.067321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:14:56.077491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:14:56.092959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:14:56.103332 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 17 12:14:56.103739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 17 12:14:56.105410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:14:56.122127 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 17 12:14:56.122192 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 17 12:14:56.122966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 17 12:14:56.148385 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 17 12:14:56.151045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 17 12:14:56.154495 kernel: PTP clock support registered
Jan 17 12:14:56.166628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 17 12:14:56.192056 kernel: hv_vmbus: registering driver hv_storvsc
Jan 17 12:14:56.192094 kernel: hv_vmbus: registering driver hv_netvsc
Jan 17 12:14:56.192114 kernel: hv_utils: Registering HyperV Utility Driver
Jan 17 12:14:56.192134 kernel: hv_vmbus: registering driver hv_utils
Jan 17 12:14:56.192153 kernel: hv_utils: Heartbeat IC version 3.0
Jan 17 12:14:56.192172 kernel: hv_utils: Shutdown IC version 3.2
Jan 17 12:14:56.192203 kernel: hv_utils: TimeSync IC version 4.0
Jan 17 12:14:56.441586 systemd-resolved[209]: Clock change detected. Flushing caches.
Jan 17 12:14:56.458178 kernel: scsi host1: storvsc_host_t
Jan 17 12:14:56.458548 kernel: scsi host0: storvsc_host_t
Jan 17 12:14:56.458751 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 17 12:14:56.458816 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 17 12:14:56.467167 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 17 12:14:56.483789 kernel: hv_vmbus: registering driver hid_hyperv
Jan 17 12:14:56.484281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 17 12:14:56.502227 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 17 12:14:56.502307 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 17 12:14:56.509069 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 17 12:14:56.511281 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 17 12:14:56.511306 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 17 12:14:56.527023 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 17 12:14:56.541400 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 17 12:14:56.541703 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 17 12:14:56.542621 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 17 12:14:56.542805 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 17 12:14:56.542972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:14:56.542992 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 17 12:14:56.623142 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: VF slot 1 added
Jan 17 12:14:56.634103 kernel: hv_vmbus: registering driver hv_pci
Jan 17 12:14:56.634173 kernel: hv_pci 58f59e6c-ee92-41b4-aaef-ca04617e9a57: PCI VMBus probing: Using version 0x10004
Jan 17 12:14:56.678080 kernel: hv_pci 58f59e6c-ee92-41b4-aaef-ca04617e9a57: PCI host bridge to bus ee92:00
Jan 17 12:14:56.678296 kernel: pci_bus ee92:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 17 12:14:56.678494 kernel: pci_bus ee92:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 17 12:14:56.678665 kernel: pci ee92:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 17 12:14:56.678882 kernel: pci ee92:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 12:14:56.679064 kernel: pci ee92:00:02.0: enabling Extended Tags
Jan 17 12:14:56.679232 kernel: pci ee92:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ee92:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 17 12:14:56.679402 kernel: pci_bus ee92:00: busn_res: [bus 00-ff] end is updated to 00
Jan 17 12:14:56.679553 kernel: pci ee92:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 17 12:14:56.842561 kernel: mlx5_core ee92:00:02.0: enabling device (0000 -> 0002)
Jan 17 12:14:57.076528 kernel: mlx5_core ee92:00:02.0: firmware version: 14.30.5000
Jan 17 12:14:57.076751 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: VF registering: eth1
Jan 17 12:14:57.077484 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447)
Jan 17 12:14:57.077513 kernel: mlx5_core ee92:00:02.0 eth1: joined to eth0
Jan 17 12:14:57.077711 kernel: mlx5_core ee92:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 17 12:14:57.047488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 17 12:14:57.089820 kernel: mlx5_core ee92:00:02.0 enP61074s1: renamed from eth1
Jan 17 12:14:57.104234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 17 12:14:57.121192 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 17 12:14:57.165805 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (444)
Jan 17 12:14:57.186525 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 17 12:14:57.193068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 17 12:14:57.204944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 17 12:14:57.216783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:14:57.225784 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:14:58.233072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 17 12:14:58.233366 disk-uuid[598]: The operation has completed successfully.
Jan 17 12:14:58.341061 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 17 12:14:58.341180 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 17 12:14:58.353954 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 17 12:14:58.360456 sh[684]: Success
Jan 17 12:14:58.392024 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 17 12:14:58.578407 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 17 12:14:58.598907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 17 12:14:58.601349 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 17 12:14:58.623782 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85
Jan 17 12:14:58.623831 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:14:58.629139 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 17 12:14:58.631903 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 17 12:14:58.634428 kernel: BTRFS info (device dm-0): using free space tree
Jan 17 12:14:58.921709 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 17 12:14:58.923780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 17 12:14:58.937028 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 17 12:14:58.942938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 17 12:14:58.964068 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:14:58.964127 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:14:58.964153 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:14:58.983786 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:14:58.999785 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:14:58.999271 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 17 12:14:59.009733 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 17 12:14:59.020968 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 17 12:14:59.040593 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 17 12:14:59.050034 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 17 12:14:59.071975 systemd-networkd[868]: lo: Link UP
Jan 17 12:14:59.071983 systemd-networkd[868]: lo: Gained carrier
Jan 17 12:14:59.074076 systemd-networkd[868]: Enumeration completed
Jan 17 12:14:59.074331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 17 12:14:59.078852 systemd[1]: Reached target network.target - Network.
Jan 17 12:14:59.080002 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:14:59.080005 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:14:59.143815 kernel: mlx5_core ee92:00:02.0 enP61074s1: Link up
Jan 17 12:14:59.180900 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: Data path switched to VF: enP61074s1
Jan 17 12:14:59.181484 systemd-networkd[868]: enP61074s1: Link UP
Jan 17 12:14:59.181608 systemd-networkd[868]: eth0: Link UP
Jan 17 12:14:59.181836 systemd-networkd[868]: eth0: Gained carrier
Jan 17 12:14:59.181851 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 17 12:14:59.192907 systemd-networkd[868]: enP61074s1: Gained carrier
Jan 17 12:14:59.219831 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 17 12:14:59.912733 ignition[837]: Ignition 2.19.0
Jan 17 12:14:59.912747 ignition[837]: Stage: fetch-offline
Jan 17 12:14:59.915268 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 17 12:14:59.912812 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:14:59.912823 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:14:59.912954 ignition[837]: parsed url from cmdline: ""
Jan 17 12:14:59.912960 ignition[837]: no config URL provided
Jan 17 12:14:59.912968 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:14:59.930942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 17 12:14:59.912977 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:14:59.912986 ignition[837]: failed to fetch config: resource requires networking
Jan 17 12:14:59.913501 ignition[837]: Ignition finished successfully
Jan 17 12:14:59.947184 ignition[877]: Ignition 2.19.0
Jan 17 12:14:59.947196 ignition[877]: Stage: fetch
Jan 17 12:14:59.947448 ignition[877]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:14:59.947462 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:14:59.947584 ignition[877]: parsed url from cmdline: ""
Jan 17 12:14:59.947588 ignition[877]: no config URL provided
Jan 17 12:14:59.947596 ignition[877]: reading system config file "/usr/lib/ignition/user.ign"
Jan 17 12:14:59.947605 ignition[877]: no config at "/usr/lib/ignition/user.ign"
Jan 17 12:14:59.947629 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 17 12:15:00.040756 ignition[877]: GET result: OK
Jan 17 12:15:00.040924 ignition[877]: config has been read from IMDS userdata
Jan 17 12:15:00.040968 ignition[877]: parsing config with SHA512: 6c5d8d6a91f3dc3568efaf635956af761e71192461618373084d93433f8c1c1c927b8c6afeadf28b1c0bf249f91ec1e6463f0fbbdbd97e0eeb7593466e0c4275
Jan 17 12:15:00.050790 unknown[877]: fetched base config from "system"
Jan 17 12:15:00.051711 ignition[877]: fetch: fetch complete
Jan 17 12:15:00.050813 unknown[877]: fetched base config from "system"
Jan 17 12:15:00.051720 ignition[877]: fetch: fetch passed
Jan 17 12:15:00.050821 unknown[877]: fetched user config from "azure"
Jan 17 12:15:00.053260 ignition[877]: Ignition finished successfully
Jan 17 12:15:00.061371 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 17 12:15:00.071044 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 17 12:15:00.089999 ignition[884]: Ignition 2.19.0
Jan 17 12:15:00.090013 ignition[884]: Stage: kargs
Jan 17 12:15:00.090262 ignition[884]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:15:00.090276 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:15:00.091226 ignition[884]: kargs: kargs passed
Jan 17 12:15:00.091287 ignition[884]: Ignition finished successfully
Jan 17 12:15:00.102685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 17 12:15:00.112033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 17 12:15:00.129037 ignition[890]: Ignition 2.19.0
Jan 17 12:15:00.129051 ignition[890]: Stage: disks
Jan 17 12:15:00.131451 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 17 12:15:00.129318 ignition[890]: no configs at "/usr/lib/ignition/base.d"
Jan 17 12:15:00.134291 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 17 12:15:00.129335 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 17 12:15:00.130319 ignition[890]: disks: disks passed
Jan 17 12:15:00.130375 ignition[890]: Ignition finished successfully
Jan 17 12:15:00.138000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 17 12:15:00.138381 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 17 12:15:00.138809 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 17 12:15:00.139223 systemd[1]: Reached target basic.target - Basic System.
Jan 17 12:15:00.168008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 17 12:15:00.235599 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 17 12:15:00.239959 systemd-networkd[868]: enP61074s1: Gained IPv6LL
Jan 17 12:15:00.244653 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 17 12:15:00.257304 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 17 12:15:00.352790 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none.
Jan 17 12:15:00.353728 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 17 12:15:00.355540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 17 12:15:00.392901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 17 12:15:00.398405 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 17 12:15:00.405966 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 17 12:15:00.411988 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (909)
Jan 17 12:15:00.418808 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8
Jan 17 12:15:00.419984 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 17 12:15:00.431289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 17 12:15:00.431320 kernel: BTRFS info (device sda6): using free space tree
Jan 17 12:15:00.431332 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 17 12:15:00.421023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 17 12:15:00.436197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 17 12:15:00.442116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 17 12:15:00.450947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 17 12:15:00.996666 coreos-metadata[911]: Jan 17 12:15:00.996 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 17 12:15:01.002512 coreos-metadata[911]: Jan 17 12:15:01.002 INFO Fetch successful
Jan 17 12:15:01.005061 coreos-metadata[911]: Jan 17 12:15:01.002 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 17 12:15:01.011528 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory
Jan 17 12:15:01.010450 systemd-networkd[868]: eth0: Gained IPv6LL
Jan 17 12:15:01.019088 coreos-metadata[911]: Jan 17 12:15:01.014 INFO Fetch successful
Jan 17 12:15:01.019088 coreos-metadata[911]: Jan 17 12:15:01.014 INFO wrote hostname ci-4081.3.0-a-bcafed7e46 to /sysroot/etc/hostname
Jan 17 12:15:01.016145 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 17 12:15:01.047374 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory
Jan 17 12:15:01.054945 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory
Jan 17 12:15:01.076680 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 17 12:15:02.159230 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 17 12:15:02.167886 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 17 12:15:02.174396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 17 12:15:02.184892 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:15:02.184462 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:15:02.216438 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:15:02.222758 ignition[1027]: INFO : Ignition 2.19.0 Jan 17 12:15:02.222758 ignition[1027]: INFO : Stage: mount Jan 17 12:15:02.226419 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:02.226419 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:02.232559 ignition[1027]: INFO : mount: mount passed Jan 17 12:15:02.234590 ignition[1027]: INFO : Ignition finished successfully Jan 17 12:15:02.234236 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:15:02.243899 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:15:02.252927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:15:02.272757 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Jan 17 12:15:02.272854 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:15:02.275879 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:15:02.279070 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:15:02.284790 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:15:02.286659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:15:02.317853 ignition[1055]: INFO : Ignition 2.19.0 Jan 17 12:15:02.317853 ignition[1055]: INFO : Stage: files Jan 17 12:15:02.322106 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:02.322106 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:02.322106 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:15:02.322106 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:15:02.322106 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:15:02.384322 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:15:02.388684 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:15:02.388684 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:15:02.384901 unknown[1055]: wrote ssh authorized keys file for user: core Jan 17 12:15:02.415036 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:15:02.488026 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:15:03.146672 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:15:03.492293 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:03.492293 ignition[1055]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): op(f): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: files passed Jan 17 12:15:03.504388 ignition[1055]: INFO : Ignition finished successfully Jan 17 12:15:03.497923 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:15:03.564007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:15:03.570028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:15:03.577217 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:15:03.578388 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:15:03.591807 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.591807 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.599991 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.601588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:15:03.610606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:15:03.623076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:15:03.657201 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:15:03.657331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:15:03.663503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:15:03.671476 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:15:03.676589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:15:03.689085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:15:03.704101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:15:03.712970 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:15:03.726218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:15:03.727690 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:15:03.728650 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:15:03.729103 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 12:15:03.729226 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:15:03.730357 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:15:03.730815 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:15:03.731221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:15:03.731641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:15:03.732072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:15:03.732487 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:15:03.732909 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:15:03.733322 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:15:03.733728 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:15:03.734692 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:15:03.735014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:15:03.735167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:15:03.735823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:15:03.736236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:15:03.849635 ignition[1108]: INFO : Ignition 2.19.0 Jan 17 12:15:03.849635 ignition[1108]: INFO : Stage: umount Jan 17 12:15:03.849635 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:03.849635 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:03.849635 ignition[1108]: INFO : umount: umount passed Jan 17 12:15:03.849635 ignition[1108]: INFO : Ignition finished successfully Jan 17 12:15:03.736593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:15:03.750201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:15:03.776865 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:15:03.777046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:15:03.792376 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:15:03.792538 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:15:03.798061 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:15:03.798216 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:15:03.802935 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:15:03.803081 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:15:03.818852 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:15:03.821295 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:15:03.821468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:15:03.832023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:15:03.835582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:15:03.835785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:15:03.838816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 12:15:03.838975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:15:03.855039 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:15:03.855168 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:15:03.860523 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:15:03.860633 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:15:03.928741 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:15:03.931170 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:15:03.936085 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:15:03.936150 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:15:03.941018 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:15:03.941064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:15:03.947598 systemd[1]: Stopped target network.target - Network. Jan 17 12:15:03.954061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:15:03.954129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:15:03.962007 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:15:03.964070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:15:03.970229 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:15:03.973472 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:15:03.975578 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:15:03.977852 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:15:03.977898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:15:03.985896 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:15:03.987980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:15:03.994966 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:15:03.995033 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:15:03.999670 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:15:03.999725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:15:04.009673 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:15:04.014402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:15:04.020431 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:15:04.021004 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:15:04.021101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:15:04.025020 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:15:04.025086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:15:04.038876 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 17 12:15:04.042458 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:15:04.042581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:15:04.047070 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:15:04.047104 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:15:04.063882 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 17 12:15:04.068483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:15:04.068568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:15:04.077169 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:15:04.083573 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:15:04.083721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:15:04.099142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:15:04.099283 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:15:04.106313 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:15:04.106388 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:15:04.111702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:15:04.111791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:15:04.123552 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:15:04.126112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:15:04.127825 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:15:04.127911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:15:04.159864 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: Data path switched from VF: enP61074s1 Jan 17 12:15:04.128260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:15:04.128295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:15:04.128653 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:15:04.128696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:15:04.129559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:15:04.129614 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:15:04.130894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:15:04.130937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:15:04.145158 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:15:04.152780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:15:04.152863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:15:04.162857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:15:04.162910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:15:04.166319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:15:04.166427 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:15:04.213958 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:15:04.214095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:15:04.219014 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:15:04.235054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:15:04.289377 systemd[1]: Switching root. 
Jan 17 12:15:04.325126 systemd-journald[176]: Journal stopped Jan 17 12:14:55.063368 kernel:
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 17 12:14:55.063376 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 17 12:14:55.063384 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 17 12:14:55.063394 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 17 12:14:55.063401 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 17 12:14:55.063412 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 17 12:14:55.063419 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 17 12:14:55.063429 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 17 12:14:55.063437 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 17 12:14:55.063448 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 17 12:14:55.063458 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 17 12:14:55.063465 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 17 12:14:55.063474 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 17 12:14:55.063483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 17 12:14:55.063491 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 17 12:14:55.063501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 17 12:14:55.063509 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 17 12:14:55.063519 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 17 12:14:55.063529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 17 12:14:55.063538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 17 12:14:55.063547 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 17 12:14:55.063555 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 17 12:14:55.063565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 17 12:14:55.063573 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 17 12:14:55.063583 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 17 12:14:55.063598 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 17 12:14:55.063609 kernel: Zone ranges: Jan 17 12:14:55.063622 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 17 12:14:55.063633 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 17 12:14:55.063642 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 12:14:55.063650 kernel: Movable zone start for each node Jan 17 12:14:55.063660 kernel: Early memory node ranges Jan 17 12:14:55.063668 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 17 12:14:55.063678 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 17 12:14:55.063686 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 17 12:14:55.063693 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 17 12:14:55.063706 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 17 12:14:55.063713 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 17 12:14:55.063723 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 17 12:14:55.063731 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 17 12:14:55.063739 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 17 12:14:55.063749 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 17 12:14:55.063757 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 17 12:14:55.063765 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 17 12:14:55.063775 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 17 12:14:55.063785 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 17 12:14:55.063795 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 17 12:14:55.063803 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 17 12:14:55.063814 kernel: Booting paravirtualized kernel on Hyper-V Jan 17 12:14:55.063822 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 17 12:14:55.063830 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 17 12:14:55.063840 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 17 12:14:55.063847 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 17 12:14:55.063857 kernel: pcpu-alloc: [0] 0 1 Jan 17 12:14:55.063867 kernel: Hyper-V: PV spinlocks enabled Jan 17 12:14:55.063878 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 17 12:14:55.063887 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:14:55.063896 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 17 12:14:55.063905 kernel: random: crng init done Jan 17 12:14:55.063912 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 17 12:14:55.063923 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 17 12:14:55.063930 kernel: Fallback order for Node 0: 0 Jan 17 12:14:55.063943 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 17 12:14:55.063959 kernel: Policy zone: Normal Jan 17 12:14:55.063971 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 17 12:14:55.063981 kernel: software IO TLB: area num 2. Jan 17 12:14:55.063996 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42848K init, 2344K bss, 310124K reserved, 0K cma-reserved) Jan 17 12:14:55.064007 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 17 12:14:55.064018 kernel: ftrace: allocating 37918 entries in 149 pages Jan 17 12:14:55.064028 kernel: ftrace: allocated 149 pages with 4 groups Jan 17 12:14:55.064038 kernel: Dynamic Preempt: voluntary Jan 17 12:14:55.064048 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 17 12:14:55.064058 kernel: rcu: RCU event tracing is enabled. Jan 17 12:14:55.064071 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 17 12:14:55.064080 kernel: Trampoline variant of Tasks RCU enabled. Jan 17 12:14:55.064088 kernel: Rude variant of Tasks RCU enabled. Jan 17 12:14:55.064096 kernel: Tracing variant of Tasks RCU enabled. 
Jan 17 12:14:55.064104 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 17 12:14:55.064114 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 17 12:14:55.064122 kernel: Using NULL legacy PIC Jan 17 12:14:55.064130 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 17 12:14:55.064138 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 17 12:14:55.064146 kernel: Console: colour dummy device 80x25 Jan 17 12:14:55.064154 kernel: printk: console [tty1] enabled Jan 17 12:14:55.064162 kernel: printk: console [ttyS0] enabled Jan 17 12:14:55.064169 kernel: printk: bootconsole [earlyser0] disabled Jan 17 12:14:55.064178 kernel: ACPI: Core revision 20230628 Jan 17 12:14:55.064188 kernel: Failed to register legacy timer interrupt Jan 17 12:14:55.064199 kernel: APIC: Switch to symmetric I/O mode setup Jan 17 12:14:55.064209 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 17 12:14:55.064217 kernel: Hyper-V: Using IPI hypercalls Jan 17 12:14:55.064228 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 17 12:14:55.064236 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 17 12:14:55.064246 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 17 12:14:55.064262 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 17 12:14:55.064273 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 17 12:14:55.064281 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 17 12:14:55.064295 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907) Jan 17 12:14:55.064303 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 17 12:14:55.064314 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 17 12:14:55.064322 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 17 12:14:55.064332 kernel: Spectre V2 : Mitigation: Retpolines Jan 17 12:14:55.064341 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 17 12:14:55.064349 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 17 12:14:55.064357 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 17 12:14:55.064368 kernel: RETBleed: Vulnerable Jan 17 12:14:55.064378 kernel: Speculative Store Bypass: Vulnerable Jan 17 12:14:55.064389 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:14:55.064396 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 17 12:14:55.064407 kernel: GDS: Unknown: Dependent on hypervisor status Jan 17 12:14:55.064416 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 17 12:14:55.064424 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 17 12:14:55.064435 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 17 12:14:55.064442 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 17 12:14:55.064454 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 17 12:14:55.064461 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 17 12:14:55.064478 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 17 12:14:55.064491 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 17 12:14:55.064501 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 17 12:14:55.064510 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 17 12:14:55.064521 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 17 12:14:55.064531 kernel: Freeing SMP alternatives memory: 32K Jan 17 12:14:55.064541 kernel: pid_max: default: 32768 minimum: 301 Jan 17 12:14:55.064549 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 17 12:14:55.064559 kernel: landlock: Up and running. Jan 17 12:14:55.064568 kernel: SELinux: Initializing. Jan 17 12:14:55.064577 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:14:55.064587 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 17 12:14:55.064595 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 17 12:14:55.064608 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:14:55.064616 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:14:55.064627 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 17 12:14:55.064635 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 17 12:14:55.064645 kernel: signal: max sigframe size: 3632 Jan 17 12:14:55.064654 kernel: rcu: Hierarchical SRCU implementation. Jan 17 12:14:55.064663 kernel: rcu: Max phase no-delay instances is 400. Jan 17 12:14:55.064673 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 17 12:14:55.064681 kernel: smp: Bringing up secondary CPUs ... Jan 17 12:14:55.064694 kernel: smpboot: x86: Booting SMP configuration: Jan 17 12:14:55.064702 kernel: .... node #0, CPUs: #1 Jan 17 12:14:55.064714 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 17 12:14:55.064723 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 17 12:14:55.064734 kernel: smp: Brought up 1 node, 2 CPUs Jan 17 12:14:55.064742 kernel: smpboot: Max logical packages: 1 Jan 17 12:14:55.064752 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 17 12:14:55.064761 kernel: devtmpfs: initialized Jan 17 12:14:55.064773 kernel: x86/mm: Memory block size: 128MB Jan 17 12:14:55.064782 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 17 12:14:55.064791 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 17 12:14:55.068654 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 17 12:14:55.068672 kernel: pinctrl core: initialized pinctrl subsystem Jan 17 12:14:55.068688 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 17 12:14:55.068703 kernel: audit: initializing netlink subsys (disabled) Jan 17 12:14:55.068718 kernel: audit: type=2000 audit(1737116093.027:1): state=initialized audit_enabled=0 res=1 Jan 17 12:14:55.068732 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 17 12:14:55.068753 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 17 12:14:55.068768 kernel: cpuidle: using governor menu Jan 17 12:14:55.068783 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 17 12:14:55.068798 kernel: dca service started, version 1.12.1 Jan 17 12:14:55.068812 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 17 12:14:55.068827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 17 12:14:55.068842 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 17 12:14:55.068857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 17 12:14:55.068872 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 17 12:14:55.068889 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 17 12:14:55.068904 kernel: ACPI: Added _OSI(Module Device) Jan 17 12:14:55.068919 kernel: ACPI: Added _OSI(Processor Device) Jan 17 12:14:55.068933 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 17 12:14:55.068948 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 17 12:14:55.068963 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 17 12:14:55.068978 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 17 12:14:55.068993 kernel: ACPI: Interpreter enabled Jan 17 12:14:55.069007 kernel: ACPI: PM: (supports S0 S5) Jan 17 12:14:55.069025 kernel: ACPI: Using IOAPIC for interrupt routing Jan 17 12:14:55.069040 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 17 12:14:55.069054 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 17 12:14:55.069069 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 17 12:14:55.069083 kernel: iommu: Default domain type: Translated Jan 17 12:14:55.069098 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 17 12:14:55.069112 kernel: efivars: Registered efivars operations Jan 17 12:14:55.069127 kernel: PCI: Using ACPI for IRQ routing Jan 17 12:14:55.069141 kernel: PCI: System does not support PCI Jan 17 12:14:55.069158 kernel: vgaarb: loaded Jan 17 12:14:55.069173 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 17 12:14:55.069188 kernel: VFS: Disk quotas dquot_6.6.0 Jan 17 12:14:55.069202 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 17 12:14:55.069217 kernel: 
pnp: PnP ACPI init Jan 17 12:14:55.069231 kernel: pnp: PnP ACPI: found 3 devices Jan 17 12:14:55.069246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 17 12:14:55.069270 kernel: NET: Registered PF_INET protocol family Jan 17 12:14:55.069285 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 17 12:14:55.069304 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 17 12:14:55.069319 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 17 12:14:55.069334 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 17 12:14:55.069349 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 17 12:14:55.069363 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 17 12:14:55.069378 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:14:55.069392 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 17 12:14:55.069407 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 17 12:14:55.069422 kernel: NET: Registered PF_XDP protocol family Jan 17 12:14:55.069439 kernel: PCI: CLS 0 bytes, default 64 Jan 17 12:14:55.069454 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 17 12:14:55.069468 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 17 12:14:55.069483 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 17 12:14:55.069498 kernel: Initialise system trusted keyrings Jan 17 12:14:55.069512 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 17 12:14:55.069527 kernel: Key type asymmetric registered Jan 17 12:14:55.069541 kernel: Asymmetric key parser 'x509' registered Jan 17 12:14:55.069555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 17 12:14:55.069573 kernel: io scheduler mq-deadline registered Jan 17 12:14:55.069587 kernel: io scheduler kyber registered Jan 17 12:14:55.069602 kernel: io scheduler bfq registered Jan 17 12:14:55.069617 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 17 12:14:55.069631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 17 12:14:55.069646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 17 12:14:55.069661 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 17 12:14:55.069675 kernel: i8042: PNP: No PS/2 controller found. 
Jan 17 12:14:55.069878 kernel: rtc_cmos 00:02: registered as rtc0 Jan 17 12:14:55.070027 kernel: rtc_cmos 00:02: setting system clock to 2025-01-17T12:14:54 UTC (1737116094) Jan 17 12:14:55.070151 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 17 12:14:55.070170 kernel: intel_pstate: CPU model not supported Jan 17 12:14:55.070184 kernel: efifb: probing for efifb Jan 17 12:14:55.070198 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 17 12:14:55.070212 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 17 12:14:55.070225 kernel: efifb: scrolling: redraw Jan 17 12:14:55.070243 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 17 12:14:55.070317 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:14:55.070331 kernel: fb0: EFI VGA frame buffer device Jan 17 12:14:55.070348 kernel: pstore: Using crash dump compression: deflate Jan 17 12:14:55.070361 kernel: pstore: Registered efi_pstore as persistent store backend Jan 17 12:14:55.070375 kernel: NET: Registered PF_INET6 protocol family Jan 17 12:14:55.070389 kernel: Segment Routing with IPv6 Jan 17 12:14:55.070403 kernel: In-situ OAM (IOAM) with IPv6 Jan 17 12:14:55.070418 kernel: NET: Registered PF_PACKET protocol family Jan 17 12:14:55.070433 kernel: Key type dns_resolver registered Jan 17 12:14:55.070453 kernel: IPI shorthand broadcast: enabled Jan 17 12:14:55.070467 kernel: sched_clock: Marking stable (873002900, 42888900)->(1119679200, -203787400) Jan 17 12:14:55.070481 kernel: registered taskstats version 1 Jan 17 12:14:55.070496 kernel: Loading compiled-in X.509 certificates Jan 17 12:14:55.070510 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 6baa290b0089ed5c4c5f7248306af816ac8c7f80' Jan 17 12:14:55.070525 kernel: Key type .fscrypt registered Jan 17 12:14:55.070538 kernel: Key type fscrypt-provisioning registered Jan 17 12:14:55.070552 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 17 12:14:55.070569 kernel: ima: Allocated hash algorithm: sha1 Jan 17 12:14:55.070584 kernel: ima: No architecture policies found Jan 17 12:14:55.070598 kernel: clk: Disabling unused clocks Jan 17 12:14:55.070611 kernel: Freeing unused kernel image (initmem) memory: 42848K Jan 17 12:14:55.070625 kernel: Write protecting the kernel read-only data: 36864k Jan 17 12:14:55.070639 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 17 12:14:55.070653 kernel: Run /init as init process Jan 17 12:14:55.070667 kernel: with arguments: Jan 17 12:14:55.070681 kernel: /init Jan 17 12:14:55.070697 kernel: with environment: Jan 17 12:14:55.070711 kernel: HOME=/ Jan 17 12:14:55.070724 kernel: TERM=linux Jan 17 12:14:55.070737 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 17 12:14:55.070754 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:14:55.070771 systemd[1]: Detected virtualization microsoft. Jan 17 12:14:55.070786 systemd[1]: Detected architecture x86-64. Jan 17 12:14:55.070801 systemd[1]: Running in initrd. Jan 17 12:14:55.070819 systemd[1]: No hostname configured, using default hostname. Jan 17 12:14:55.070833 systemd[1]: Hostname set to <localhost>. Jan 17 12:14:55.070848 systemd[1]: Initializing machine ID from random generator.
Jan 17 12:14:55.070864 systemd[1]: Queued start job for default target initrd.target. Jan 17 12:14:55.070878 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:14:55.070893 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:14:55.070909 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 17 12:14:55.070923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:14:55.070941 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 17 12:14:55.070957 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 17 12:14:55.070974 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 17 12:14:55.070990 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 17 12:14:55.071005 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:14:55.071020 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:14:55.071035 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:14:55.071054 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:14:55.071070 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:14:55.071085 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:14:55.071100 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:14:55.071116 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:14:55.071132 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:14:55.071147 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:14:55.071163 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:14:55.071180 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:14:55.071198 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:14:55.071213 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:14:55.071229 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 17 12:14:55.071245 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:14:55.071272 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 17 12:14:55.071288 systemd[1]: Starting systemd-fsck-usr.service... Jan 17 12:14:55.071304 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:14:55.071319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:14:55.071337 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:14:55.071379 systemd-journald[176]: Collecting audit messages is disabled. Jan 17 12:14:55.071416 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 17 12:14:55.071431 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:14:55.071453 systemd-journald[176]: Journal started Jan 17 12:14:55.071502 systemd-journald[176]: Runtime Journal (/run/log/journal/8307dbddd4374dbc870e530011332acd) is 8.0M, max 158.8M, 150.8M free. 
Jan 17 12:14:55.076356 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:14:55.077101 systemd[1]: Finished systemd-fsck-usr.service. Jan 17 12:14:55.085541 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:14:55.093840 systemd-modules-load[177]: Inserted module 'overlay' Jan 17 12:14:55.098444 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:14:55.106351 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:14:55.115477 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:14:55.131571 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:14:55.139542 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 17 12:14:55.144600 kernel: Bridge firewalling registered Jan 17 12:14:55.143461 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 17 12:14:55.150490 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 17 12:14:55.152480 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:14:55.155457 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:14:55.170560 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:14:55.183890 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:14:55.187495 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:14:55.197610 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:14:55.203157 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:14:55.212448 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 17 12:14:55.233272 dracut-cmdline[214]: dracut-dracut-053 Jan 17 12:14:55.237488 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=bf1e0d81a0170850ab02d370c1a7c7a3f5983c980b3730f748240a3bda2dbb2e Jan 17 12:14:55.263241 systemd-resolved[209]: Positive Trust Anchors: Jan 17 12:14:55.263271 systemd-resolved[209]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:14:55.263327 systemd-resolved[209]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:14:55.288004 systemd-resolved[209]: Defaulting to hostname 'linux'. 
Jan 17 12:14:55.289424 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:14:55.292232 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:14:55.321277 kernel: SCSI subsystem initialized Jan 17 12:14:55.331284 kernel: Loading iSCSI transport class v2.0-870. Jan 17 12:14:55.342283 kernel: iscsi: registered transport (tcp) Jan 17 12:14:55.363733 kernel: iscsi: registered transport (qla4xxx) Jan 17 12:14:55.363828 kernel: QLogic iSCSI HBA Driver Jan 17 12:14:55.399859 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 17 12:14:55.407451 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 17 12:14:55.435048 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 17 12:14:55.435141 kernel: device-mapper: uevent: version 1.0.3 Jan 17 12:14:55.438263 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 17 12:14:55.479292 kernel: raid6: avx512x4 gen() 18648 MB/s Jan 17 12:14:55.498274 kernel: raid6: avx512x2 gen() 18637 MB/s Jan 17 12:14:55.517269 kernel: raid6: avx512x1 gen() 18517 MB/s Jan 17 12:14:55.536270 kernel: raid6: avx2x4 gen() 18361 MB/s Jan 17 12:14:55.555270 kernel: raid6: avx2x2 gen() 18394 MB/s Jan 17 12:14:55.574886 kernel: raid6: avx2x1 gen() 13740 MB/s Jan 17 12:14:55.574918 kernel: raid6: using algorithm avx512x4 gen() 18648 MB/s Jan 17 12:14:55.595869 kernel: raid6: .... xor() 7002 MB/s, rmw enabled Jan 17 12:14:55.595904 kernel: raid6: using avx512x2 recovery algorithm Jan 17 12:14:55.618281 kernel: xor: automatically using best checksumming function avx Jan 17 12:14:55.765292 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 17 12:14:55.775356 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:14:55.787450 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:14:55.800659 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 17 12:14:55.805156 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:14:55.820528 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 17 12:14:55.837207 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 17 12:14:55.867790 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:14:55.876615 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:14:55.920767 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:14:55.932461 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 17 12:14:55.960711 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 17 12:14:55.971853 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:14:55.975768 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:14:55.982006 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:14:56.001497 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 17 12:14:56.022432 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:14:56.027072 kernel: cryptd: max_cpu_qlen set to 1000 Jan 17 12:14:56.045859 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 17 12:14:56.054959 kernel: AVX2 version of gcm_enc/dec engaged. Jan 17 12:14:56.054989 kernel: AES CTR mode by8 optimization enabled Jan 17 12:14:56.046098 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:14:56.064337 kernel: hv_vmbus: Vmbus version:5.2 Jan 17 12:14:56.061013 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:14:56.066985 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:14:56.067321 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:14:56.077491 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:14:56.092959 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:14:56.103332 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 17 12:14:56.103739 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:14:56.105410 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:14:56.122127 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 17 12:14:56.122192 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 17 12:14:56.122966 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:14:56.148385 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 17 12:14:56.151045 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:14:56.154495 kernel: PTP clock support registered Jan 17 12:14:56.166628 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 17 12:14:56.192056 kernel: hv_vmbus: registering driver hv_storvsc Jan 17 12:14:56.192094 kernel: hv_vmbus: registering driver hv_netvsc Jan 17 12:14:56.192114 kernel: hv_utils: Registering HyperV Utility Driver Jan 17 12:14:56.192134 kernel: hv_vmbus: registering driver hv_utils Jan 17 12:14:56.192153 kernel: hv_utils: Heartbeat IC version 3.0 Jan 17 12:14:56.192172 kernel: hv_utils: Shutdown IC version 3.2 Jan 17 12:14:56.192203 kernel: hv_utils: TimeSync IC version 4.0 Jan 17 12:14:56.441586 systemd-resolved[209]: Clock change detected. Flushing caches. Jan 17 12:14:56.458178 kernel: scsi host1: storvsc_host_t Jan 17 12:14:56.458548 kernel: scsi host0: storvsc_host_t Jan 17 12:14:56.458751 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 17 12:14:56.458816 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 17 12:14:56.467167 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 17 12:14:56.483789 kernel: hv_vmbus: registering driver hid_hyperv Jan 17 12:14:56.484281 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 17 12:14:56.502227 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 17 12:14:56.502307 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 17 12:14:56.509069 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 17 12:14:56.511281 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 17 12:14:56.511306 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 17 12:14:56.527023 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 17 12:14:56.541400 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 17 12:14:56.541703 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 17 12:14:56.542621 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 17 12:14:56.542805 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 17 12:14:56.542972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:14:56.542992 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 17 12:14:56.623142 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: VF slot 1 added Jan 17 12:14:56.634103 kernel: hv_vmbus: registering driver hv_pci Jan 17 12:14:56.634173 kernel: hv_pci 58f59e6c-ee92-41b4-aaef-ca04617e9a57: PCI VMBus probing: Using version 0x10004 Jan 17 12:14:56.678080 kernel: hv_pci 58f59e6c-ee92-41b4-aaef-ca04617e9a57: PCI host bridge to bus ee92:00 Jan 17 12:14:56.678296 kernel: pci_bus ee92:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 17 12:14:56.678494 kernel: pci_bus ee92:00: No busn resource found for root bus, will use [bus 00-ff] Jan 17 12:14:56.678665 kernel: pci ee92:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 17 12:14:56.678882 kernel: pci ee92:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 17 12:14:56.679064 kernel: pci ee92:00:02.0: enabling Extended Tags Jan 17 12:14:56.679232 kernel: pci ee92:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at ee92:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 17 12:14:56.679402 kernel: pci_bus ee92:00: busn_res: [bus 00-ff] end is updated to 00 Jan 17 12:14:56.679553 kernel: pci ee92:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 17 12:14:56.842561 kernel: mlx5_core ee92:00:02.0: enabling device (0000 -> 0002) Jan 17 12:14:57.076528 kernel: mlx5_core ee92:00:02.0: firmware version: 14.30.5000 Jan 17 12:14:57.076751 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: VF registering: eth1 Jan 17 12:14:57.077484 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Jan 17 12:14:57.077513 kernel: mlx5_core ee92:00:02.0 eth1: joined to eth0 Jan 17 12:14:57.077711 kernel: mlx5_core ee92:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 17 12:14:57.047488 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 17 12:14:57.089820 kernel: mlx5_core ee92:00:02.0 enP61074s1: renamed from eth1 Jan 17 12:14:57.104234 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:14:57.121192 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 17 12:14:57.165805 kernel: BTRFS: device fsid e459b8ee-f1f7-4c3d-a087-3f1955f52c85 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (444) Jan 17 12:14:57.186525 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. 
Jan 17 12:14:57.193068 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 17 12:14:57.204944 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 17 12:14:57.216783 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:14:57.225784 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:14:58.233072 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 17 12:14:58.233366 disk-uuid[598]: The operation has completed successfully. Jan 17 12:14:58.341061 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 17 12:14:58.341180 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 17 12:14:58.353954 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 17 12:14:58.360456 sh[684]: Success Jan 17 12:14:58.392024 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 17 12:14:58.578407 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 17 12:14:58.598907 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 17 12:14:58.601349 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 17 12:14:58.623782 kernel: BTRFS info (device dm-0): first mount of filesystem e459b8ee-f1f7-4c3d-a087-3f1955f52c85 Jan 17 12:14:58.623831 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:14:58.629139 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 17 12:14:58.631903 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 17 12:14:58.634428 kernel: BTRFS info (device dm-0): using free space tree Jan 17 12:14:58.921709 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 17 12:14:58.923780 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 17 12:14:58.937028 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 17 12:14:58.942938 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 17 12:14:58.964068 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:14:58.964127 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:14:58.964153 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:14:58.983786 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:14:58.999785 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:14:58.999271 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 17 12:14:59.009733 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 17 12:14:59.020968 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 17 12:14:59.040593 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:14:59.050034 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:14:59.071975 systemd-networkd[868]: lo: Link UP Jan 17 12:14:59.071983 systemd-networkd[868]: lo: Gained carrier Jan 17 12:14:59.074076 systemd-networkd[868]: Enumeration completed Jan 17 12:14:59.074331 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 17 12:14:59.078852 systemd[1]: Reached target network.target - Network. Jan 17 12:14:59.080002 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:14:59.080005 systemd-networkd[868]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:14:59.143815 kernel: mlx5_core ee92:00:02.0 enP61074s1: Link up Jan 17 12:14:59.180900 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: Data path switched to VF: enP61074s1 Jan 17 12:14:59.181484 systemd-networkd[868]: enP61074s1: Link UP Jan 17 12:14:59.181608 systemd-networkd[868]: eth0: Link UP Jan 17 12:14:59.181836 systemd-networkd[868]: eth0: Gained carrier Jan 17 12:14:59.181851 systemd-networkd[868]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:14:59.192907 systemd-networkd[868]: enP61074s1: Gained carrier Jan 17 12:14:59.219831 systemd-networkd[868]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 12:14:59.912733 ignition[837]: Ignition 2.19.0 Jan 17 12:14:59.912747 ignition[837]: Stage: fetch-offline Jan 17 12:14:59.915268 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:14:59.912812 ignition[837]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:14:59.912823 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:14:59.912954 ignition[837]: parsed url from cmdline: "" Jan 17 12:14:59.912960 ignition[837]: no config URL provided Jan 17 12:14:59.912968 ignition[837]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:14:59.930942 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 17 12:14:59.912977 ignition[837]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:14:59.912986 ignition[837]: failed to fetch config: resource requires networking Jan 17 12:14:59.913501 ignition[837]: Ignition finished successfully Jan 17 12:14:59.947184 ignition[877]: Ignition 2.19.0 Jan 17 12:14:59.947196 ignition[877]: Stage: fetch Jan 17 12:14:59.947448 ignition[877]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:14:59.947462 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:14:59.947584 ignition[877]: parsed url from cmdline: "" Jan 17 12:14:59.947588 ignition[877]: no config URL provided Jan 17 12:14:59.947596 ignition[877]: reading system config file "/usr/lib/ignition/user.ign" Jan 17 12:14:59.947605 ignition[877]: no config at "/usr/lib/ignition/user.ign" Jan 17 12:14:59.947629 ignition[877]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 17 12:15:00.040756 ignition[877]: GET result: OK Jan 17 12:15:00.040924 ignition[877]: config has been read from IMDS userdata Jan 17 12:15:00.040968 ignition[877]: parsing config with SHA512: 6c5d8d6a91f3dc3568efaf635956af761e71192461618373084d93433f8c1c1c927b8c6afeadf28b1c0bf249f91ec1e6463f0fbbdbd97e0eeb7593466e0c4275 Jan 17 12:15:00.050790 unknown[877]: fetched base config from "system" Jan 17 12:15:00.051711 ignition[877]: fetch: fetch complete Jan 17 12:15:00.050813 unknown[877]: fetched base config from "system" Jan 17 12:15:00.051720 ignition[877]: fetch: fetch passed Jan 17 12:15:00.050821 unknown[877]: fetched user config from "azure" Jan 17 12:15:00.053260 ignition[877]: Ignition finished successfully Jan 17 12:15:00.061371 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 17 12:15:00.071044 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 17 12:15:00.089999 ignition[884]: Ignition 2.19.0 Jan 17 12:15:00.090013 ignition[884]: Stage: kargs Jan 17 12:15:00.090262 ignition[884]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:00.090276 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:00.091226 ignition[884]: kargs: kargs passed Jan 17 12:15:00.091287 ignition[884]: Ignition finished successfully Jan 17 12:15:00.102685 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 17 12:15:00.112033 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 17 12:15:00.129037 ignition[890]: Ignition 2.19.0 Jan 17 12:15:00.129051 ignition[890]: Stage: disks Jan 17 12:15:00.131451 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 17 12:15:00.129318 ignition[890]: no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:00.134291 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 17 12:15:00.129335 ignition[890]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:00.130319 ignition[890]: disks: disks passed Jan 17 12:15:00.130375 ignition[890]: Ignition finished successfully Jan 17 12:15:00.138000 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:15:00.138381 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:15:00.138809 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:15:00.139223 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:15:00.168008 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
Jan 17 12:15:00.235599 systemd-fsck[898]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 17 12:15:00.239959 systemd-networkd[868]: enP61074s1: Gained IPv6LL Jan 17 12:15:00.244653 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 17 12:15:00.257304 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 17 12:15:00.352790 kernel: EXT4-fs (sda9): mounted filesystem 0ba4fe0e-76d7-406f-b570-4642d86198f6 r/w with ordered data mode. Quota mode: none. Jan 17 12:15:00.353728 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 17 12:15:00.355540 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 17 12:15:00.392901 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:15:00.398405 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 17 12:15:00.405966 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 17 12:15:00.411988 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (909) Jan 17 12:15:00.418808 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:15:00.419984 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 17 12:15:00.431289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:15:00.431320 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:15:00.431332 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:15:00.421023 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:15:00.436197 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:15:00.442116 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 17 12:15:00.450947 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 17 12:15:00.996666 coreos-metadata[911]: Jan 17 12:15:00.996 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:15:01.002512 coreos-metadata[911]: Jan 17 12:15:01.002 INFO Fetch successful Jan 17 12:15:01.005061 coreos-metadata[911]: Jan 17 12:15:01.002 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:15:01.011528 initrd-setup-root[937]: cut: /sysroot/etc/passwd: No such file or directory Jan 17 12:15:01.010450 systemd-networkd[868]: eth0: Gained IPv6LL Jan 17 12:15:01.019088 coreos-metadata[911]: Jan 17 12:15:01.014 INFO Fetch successful Jan 17 12:15:01.019088 coreos-metadata[911]: Jan 17 12:15:01.014 INFO wrote hostname ci-4081.3.0-a-bcafed7e46 to /sysroot/etc/hostname Jan 17 12:15:01.016145 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:15:01.047374 initrd-setup-root[945]: cut: /sysroot/etc/group: No such file or directory Jan 17 12:15:01.054945 initrd-setup-root[952]: cut: /sysroot/etc/shadow: No such file or directory Jan 17 12:15:01.076680 initrd-setup-root[959]: cut: /sysroot/etc/gshadow: No such file or directory Jan 17 12:15:02.159230 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 17 12:15:02.167886 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 17 12:15:02.174396 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 17 12:15:02.184892 kernel: BTRFS info (device sda6): last unmount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:15:02.184462 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 17 12:15:02.216438 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 17 12:15:02.222758 ignition[1027]: INFO : Ignition 2.19.0 Jan 17 12:15:02.222758 ignition[1027]: INFO : Stage: mount Jan 17 12:15:02.226419 ignition[1027]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:02.226419 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:02.232559 ignition[1027]: INFO : mount: mount passed Jan 17 12:15:02.234590 ignition[1027]: INFO : Ignition finished successfully Jan 17 12:15:02.234236 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 17 12:15:02.243899 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 17 12:15:02.252927 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 17 12:15:02.272757 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1039) Jan 17 12:15:02.272854 kernel: BTRFS info (device sda6): first mount of filesystem a70a40d6-5ab2-4665-81b1-b8e9f58c5ff8 Jan 17 12:15:02.275879 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 17 12:15:02.279070 kernel: BTRFS info (device sda6): using free space tree Jan 17 12:15:02.284790 kernel: BTRFS info (device sda6): auto enabling async discard Jan 17 12:15:02.286659 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 17 12:15:02.317853 ignition[1055]: INFO : Ignition 2.19.0 Jan 17 12:15:02.317853 ignition[1055]: INFO : Stage: files Jan 17 12:15:02.322106 ignition[1055]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:02.322106 ignition[1055]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:02.322106 ignition[1055]: DEBUG : files: compiled without relabeling support, skipping Jan 17 12:15:02.322106 ignition[1055]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 17 12:15:02.322106 ignition[1055]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 17 12:15:02.384322 ignition[1055]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 17 12:15:02.388684 ignition[1055]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 17 12:15:02.388684 ignition[1055]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 17 12:15:02.384901 unknown[1055]: wrote ssh authorized keys file for user: core Jan 17 12:15:02.415036 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:15:02.419483 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 17 12:15:02.488026 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 17 12:15:02.662791 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:02.717826 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jan 17 12:15:03.146672 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 17 12:15:03.492293 ignition[1055]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jan 17 12:15:03.492293 ignition[1055]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): op(f): [started] writing unit 
"prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 17 12:15:03.504388 ignition[1055]: INFO : files: files passed Jan 17 12:15:03.504388 ignition[1055]: INFO : Ignition finished successfully Jan 17 12:15:03.497923 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 17 12:15:03.564007 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 17 12:15:03.570028 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 17 12:15:03.577217 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 17 12:15:03.578388 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 17 12:15:03.591807 initrd-setup-root-after-ignition[1084]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.591807 initrd-setup-root-after-ignition[1084]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.599991 initrd-setup-root-after-ignition[1088]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 17 12:15:03.601588 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:15:03.610606 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 17 12:15:03.623076 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 17 12:15:03.657201 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 17 12:15:03.657331 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 17 12:15:03.663503 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 17 12:15:03.671476 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 17 12:15:03.676589 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 17 12:15:03.689085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 17 12:15:03.704101 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:15:03.712970 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 17 12:15:03.726218 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:15:03.727690 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:15:03.728650 systemd[1]: Stopped target timers.target - Timer Units. Jan 17 12:15:03.729103 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Jan 17 12:15:03.729226 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 17 12:15:03.730357 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 17 12:15:03.730815 systemd[1]: Stopped target basic.target - Basic System. Jan 17 12:15:03.731221 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 17 12:15:03.731641 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 17 12:15:03.732072 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 17 12:15:03.732487 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 17 12:15:03.732909 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 17 12:15:03.733322 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 17 12:15:03.733728 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 17 12:15:03.734692 systemd[1]: Stopped target swap.target - Swaps. Jan 17 12:15:03.735014 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 17 12:15:03.735167 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 17 12:15:03.735823 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:15:03.736236 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:15:03.849635 ignition[1108]: INFO : Ignition 2.19.0 Jan 17 12:15:03.849635 ignition[1108]: INFO : Stage: umount Jan 17 12:15:03.849635 ignition[1108]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 17 12:15:03.849635 ignition[1108]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 17 12:15:03.849635 ignition[1108]: INFO : umount: umount passed Jan 17 12:15:03.849635 ignition[1108]: INFO : Ignition finished successfully Jan 17 12:15:03.736593 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 17 12:15:03.750201 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:15:03.776865 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 17 12:15:03.777046 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 17 12:15:03.792376 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 17 12:15:03.792538 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 17 12:15:03.798061 systemd[1]: ignition-files.service: Deactivated successfully. Jan 17 12:15:03.798216 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 17 12:15:03.802935 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 17 12:15:03.803081 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 17 12:15:03.818852 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 17 12:15:03.821295 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 17 12:15:03.821468 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:15:03.832023 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 17 12:15:03.835582 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 17 12:15:03.835785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:15:03.838816 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. 
Jan 17 12:15:03.838975 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 17 12:15:03.855039 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 17 12:15:03.855168 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 17 12:15:03.860523 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 17 12:15:03.860633 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 17 12:15:03.928741 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 17 12:15:03.931170 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 17 12:15:03.936085 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 17 12:15:03.936150 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 17 12:15:03.941018 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 17 12:15:03.941064 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 17 12:15:03.947598 systemd[1]: Stopped target network.target - Network. Jan 17 12:15:03.954061 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 17 12:15:03.954129 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 17 12:15:03.962007 systemd[1]: Stopped target paths.target - Path Units. Jan 17 12:15:03.964070 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 17 12:15:03.970229 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:15:03.973472 systemd[1]: Stopped target slices.target - Slice Units. Jan 17 12:15:03.975578 systemd[1]: Stopped target sockets.target - Socket Units. Jan 17 12:15:03.977852 systemd[1]: iscsid.socket: Deactivated successfully. Jan 17 12:15:03.977898 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 17 12:15:03.985896 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 17 12:15:03.987980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 17 12:15:03.994966 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 17 12:15:03.995033 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 17 12:15:03.999670 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 17 12:15:03.999725 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 17 12:15:04.009673 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 17 12:15:04.014402 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 17 12:15:04.020431 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 17 12:15:04.021004 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 17 12:15:04.021101 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 17 12:15:04.025020 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 17 12:15:04.025086 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 17 12:15:04.038876 systemd-networkd[868]: eth0: DHCPv6 lease lost Jan 17 12:15:04.042458 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 17 12:15:04.042581 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 17 12:15:04.047070 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 17 12:15:04.047104 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:15:04.063882 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 17 12:15:04.068483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 17 12:15:04.068568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 17 12:15:04.077169 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:15:04.083573 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 17 12:15:04.083721 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 17 12:15:04.099142 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 17 12:15:04.099283 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:15:04.106313 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 17 12:15:04.106388 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 17 12:15:04.111702 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 17 12:15:04.111791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:15:04.123552 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 17 12:15:04.126112 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:15:04.127825 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 17 12:15:04.127911 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 17 12:15:04.159864 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: Data path switched from VF: enP61074s1 Jan 17 12:15:04.128260 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 17 12:15:04.128295 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:15:04.128653 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 17 12:15:04.128696 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 17 12:15:04.129559 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 17 12:15:04.129614 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 17 12:15:04.130894 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 17 12:15:04.130937 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 17 12:15:04.145158 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 17 12:15:04.152780 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 17 12:15:04.152863 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:15:04.162857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:15:04.162910 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:15:04.166319 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 17 12:15:04.166427 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 17 12:15:04.213958 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 17 12:15:04.214095 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 17 12:15:04.219014 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 17 12:15:04.235054 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 17 12:15:04.289377 systemd[1]: Switching root. 
Jan 17 12:15:04.325126 systemd-journald[176]: Journal stopped Jan 17 12:15:09.560747 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jan 17 12:15:09.560787 kernel: SELinux: policy capability network_peer_controls=1 Jan 17 12:15:09.560799 kernel: SELinux: policy capability open_perms=1 Jan 17 12:15:09.560810 kernel: SELinux: policy capability extended_socket_class=1 Jan 17 12:15:09.560817 kernel: SELinux: policy capability always_check_network=0 Jan 17 12:15:09.560828 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 17 12:15:09.560837 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 17 12:15:09.560848 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 17 12:15:09.560859 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 17 12:15:09.560868 kernel: audit: type=1403 audit(1737116106.553:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 17 12:15:09.560884 systemd[1]: Successfully loaded SELinux policy in 186.198ms. Jan 17 12:15:09.560895 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.859ms. Jan 17 12:15:09.560906 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 17 12:15:09.560919 systemd[1]: Detected virtualization microsoft. Jan 17 12:15:09.560934 systemd[1]: Detected architecture x86-64. Jan 17 12:15:09.560944 systemd[1]: Detected first boot. Jan 17 12:15:09.560956 systemd[1]: Hostname set to <ci-4081.3.0-a-bcafed7e46>. Jan 17 12:15:09.560968 systemd[1]: Initializing machine ID from random generator. Jan 17 12:15:09.560980 zram_generator::config[1167]: No configuration found. Jan 17 12:15:09.560993 systemd[1]: Populated /etc with preset unit settings. Jan 17 12:15:09.561005 systemd[1]: Queued start job for default target multi-user.target. Jan 17 12:15:09.561015 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 17 12:15:09.561028 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 17 12:15:09.561037 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 17 12:15:09.561050 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 17 12:15:09.561060 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 17 12:15:09.561074 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 17 12:15:09.561084 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 17 12:15:09.561097 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 17 12:15:09.561107 systemd[1]: Created slice user.slice - User and Session Slice. Jan 17 12:15:09.561119 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 17 12:15:09.561132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 17 12:15:09.561142 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 17 12:15:09.561156 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. 
Jan 17 12:15:09.561166 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 17 12:15:09.561179 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 17 12:15:09.561190 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 17 12:15:09.561201 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 17 12:15:09.561216 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 17 12:15:09.561235 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 17 12:15:09.561264 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 17 12:15:09.561285 systemd[1]: Reached target slices.target - Slice Units. Jan 17 12:15:09.561310 systemd[1]: Reached target swap.target - Swaps. Jan 17 12:15:09.561333 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 17 12:15:09.561354 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 17 12:15:09.561375 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 17 12:15:09.561399 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 17 12:15:09.561421 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 17 12:15:09.561440 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 17 12:15:09.561464 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 17 12:15:09.561487 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 17 12:15:09.561508 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 17 12:15:09.561530 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 17 12:15:09.561552 systemd[1]: Mounting media.mount - External Media Directory... Jan 17 12:15:09.561581 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:09.561602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 17 12:15:09.561626 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 17 12:15:09.561650 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 17 12:15:09.561673 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 17 12:15:09.561699 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:15:09.561727 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 17 12:15:09.561751 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 17 12:15:09.561890 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:15:09.561915 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:15:09.561935 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:15:09.561959 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 17 12:15:09.561981 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:15:09.562004 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 17 12:15:09.562031 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Jan 17 12:15:09.562054 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Jan 17 12:15:09.562079 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 17 12:15:09.562097 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 17 12:15:09.562117 kernel: loop: module loaded Jan 17 12:15:09.562138 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 17 12:15:09.562160 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 17 12:15:09.562180 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 17 12:15:09.562200 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:09.562253 systemd-journald[1288]: Collecting audit messages is disabled. Jan 17 12:15:09.562298 kernel: fuse: init (API version 7.39) Jan 17 12:15:09.562317 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 17 12:15:09.562342 systemd-journald[1288]: Journal started Jan 17 12:15:09.562382 systemd-journald[1288]: Runtime Journal (/run/log/journal/3d2cfd6c9c1f4c87b07c38f7719d928c) is 8.0M, max 158.8M, 150.8M free. Jan 17 12:15:09.572241 systemd[1]: Started systemd-journald.service - Journal Service. Jan 17 12:15:09.575799 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 17 12:15:09.579317 systemd[1]: Mounted media.mount - External Media Directory. Jan 17 12:15:09.583990 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 17 12:15:09.587052 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 17 12:15:09.589921 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 17 12:15:09.592625 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 17 12:15:09.596106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 17 12:15:09.599548 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 17 12:15:09.599846 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 17 12:15:09.606823 kernel: ACPI: bus type drm_connector registered Jan 17 12:15:09.606198 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:15:09.606420 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:15:09.609667 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:15:09.610252 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:15:09.613540 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:15:09.614220 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:15:09.617654 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 17 12:15:09.617996 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 17 12:15:09.620919 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:15:09.621149 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:15:09.624215 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 17 12:15:09.627870 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 17 12:15:09.634238 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 17 12:15:09.656512 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 17 12:15:09.665897 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 17 12:15:09.677975 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 17 12:15:09.684950 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 17 12:15:09.689920 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 17 12:15:09.703950 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 17 12:15:09.707964 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:15:09.710934 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 17 12:15:09.713992 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:15:09.721027 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 17 12:15:09.725259 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 17 12:15:09.732312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 17 12:15:09.735797 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 17 12:15:09.739939 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 17 12:15:09.757993 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 17 12:15:09.762208 systemd-journald[1288]: Time spent on flushing to /var/log/journal/3d2cfd6c9c1f4c87b07c38f7719d928c is 23.462ms for 949 entries. Jan 17 12:15:09.762208 systemd-journald[1288]: System Journal (/var/log/journal/3d2cfd6c9c1f4c87b07c38f7719d928c) is 8.0M, max 2.6G, 2.6G free. Jan 17 12:15:09.852052 systemd-journald[1288]: Received client request to flush runtime journal. Jan 17 12:15:09.765833 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 17 12:15:09.773152 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 17 12:15:09.784969 udevadm[1332]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 17 12:15:09.853686 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 17 12:15:09.864735 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jan 17 12:15:09.864759 systemd-tmpfiles[1327]: ACLs are not supported, ignoring. Jan 17 12:15:09.870072 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 17 12:15:09.877927 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 17 12:15:09.893419 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 17 12:15:10.009421 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 17 12:15:10.024021 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 17 12:15:10.043370 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Jan 17 12:15:10.043395 systemd-tmpfiles[1348]: ACLs are not supported, ignoring. Jan 17 12:15:10.047271 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 17 12:15:11.153935 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 17 12:15:11.162025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 17 12:15:11.192055 systemd-udevd[1354]: Using default interface naming scheme 'v255'. Jan 17 12:15:11.332550 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 17 12:15:11.348984 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 17 12:15:11.400650 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Jan 17 12:15:11.452957 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 17 12:15:11.537278 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 17 12:15:11.543156 kernel: mousedev: PS/2 mouse device common for all mice Jan 17 12:15:11.543260 kernel: hv_vmbus: registering driver hv_balloon Jan 17 12:15:11.549868 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 17 12:15:11.549962 kernel: hv_vmbus: registering driver hyperv_fb Jan 17 12:15:11.555785 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 17 12:15:11.560827 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 17 12:15:11.568387 kernel: Console: switching to colour dummy device 80x25 Jan 17 12:15:11.600039 kernel: Console: switching to colour frame buffer device 128x48 Jan 17 12:15:11.742221 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:15:11.771946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:15:11.772335 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:15:11.783994 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:15:11.830971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 17 12:15:11.831292 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:15:11.841652 systemd-networkd[1362]: lo: Link UP Jan 17 12:15:11.841658 systemd-networkd[1362]: lo: Gained carrier Jan 17 12:15:11.841998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 17 12:15:11.851425 systemd-networkd[1362]: Enumeration completed Jan 17 12:15:11.851960 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:15:11.851965 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 17 12:15:11.875778 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1357) Jan 17 12:15:11.911053 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 17 12:15:11.938018 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Jan 17 12:15:11.943783 kernel: mlx5_core ee92:00:02.0 enP61074s1: Link up Jan 17 12:15:11.964796 kernel: hv_netvsc 6045bde0-affd-6045-bde0-affd6045bde0 eth0: Data path switched to VF: enP61074s1 Jan 17 12:15:11.969896 systemd-networkd[1362]: enP61074s1: Link UP Jan 17 12:15:11.970055 systemd-networkd[1362]: eth0: Link UP Jan 17 12:15:11.970064 systemd-networkd[1362]: eth0: Gained carrier Jan 17 12:15:11.970088 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:15:11.975066 systemd-networkd[1362]: enP61074s1: Gained carrier Jan 17 12:15:11.999883 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 12:15:12.051783 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 17 12:15:12.067376 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 17 12:15:12.108577 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 17 12:15:12.124053 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 17 12:15:12.172735 lvm[1448]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:15:12.202146 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 17 12:15:12.206178 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 17 12:15:12.214959 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 17 12:15:12.221187 lvm[1452]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 17 12:15:12.232366 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 17 12:15:12.254145 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 17 12:15:12.258084 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 17 12:15:12.261417 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 17 12:15:12.261637 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 17 12:15:12.264612 systemd[1]: Reached target machines.target - Containers. Jan 17 12:15:12.268420 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 17 12:15:12.276952 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 17 12:15:12.281426 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 17 12:15:12.284096 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:15:12.287946 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 17 12:15:12.296972 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 17 12:15:12.303950 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 17 12:15:12.308286 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 17 12:15:12.338343 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 17 12:15:12.364795 kernel: loop0: detected capacity change from 0 to 140768 Jan 17 12:15:12.386878 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 17 12:15:12.388017 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 17 12:15:12.784785 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 17 12:15:12.827815 kernel: loop1: detected capacity change from 0 to 142488 Jan 17 12:15:13.231983 systemd-networkd[1362]: enP61074s1: Gained IPv6LL Jan 17 12:15:13.327784 kernel: loop2: detected capacity change from 0 to 211296 Jan 17 12:15:13.379789 kernel: loop3: detected capacity change from 0 to 31056 Jan 17 12:15:13.732780 kernel: loop4: detected capacity change from 0 to 140768 Jan 17 12:15:13.746788 kernel: loop5: detected capacity change from 0 to 142488 Jan 17 12:15:13.758788 kernel: loop6: detected capacity change from 0 to 211296 Jan 17 12:15:13.765784 kernel: loop7: detected capacity change from 0 to 31056 Jan 17 12:15:13.770684 (sd-merge)[1476]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 17 12:15:13.771284 (sd-merge)[1476]: Merged extensions into '/usr'. Jan 17 12:15:13.774871 systemd[1]: Reloading requested from client PID 1463 ('systemd-sysext') (unit systemd-sysext.service)... Jan 17 12:15:13.774887 systemd[1]: Reloading... Jan 17 12:15:13.828788 zram_generator::config[1503]: No configuration found. Jan 17 12:15:13.936959 systemd-networkd[1362]: eth0: Gained IPv6LL Jan 17 12:15:14.020694 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:14.093067 systemd[1]: Reloading finished in 317 ms. Jan 17 12:15:14.106038 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 17 12:15:14.110377 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 17 12:15:14.122079 systemd[1]: Starting ensure-sysext.service... Jan 17 12:15:14.126929 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 17 12:15:14.133380 systemd[1]: Reloading requested from client PID 1570 ('systemctl') (unit ensure-sysext.service)... Jan 17 12:15:14.133511 systemd[1]: Reloading... Jan 17 12:15:14.163025 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 17 12:15:14.163991 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 17 12:15:14.165365 systemd-tmpfiles[1571]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 17 12:15:14.166004 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Jan 17 12:15:14.167984 systemd-tmpfiles[1571]: ACLs are not supported, ignoring. Jan 17 12:15:14.191124 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot. Jan 17 12:15:14.191807 systemd-tmpfiles[1571]: Skipping /boot Jan 17 12:15:14.200877 zram_generator::config[1595]: No configuration found. Jan 17 12:15:14.215612 systemd-tmpfiles[1571]: Detected autofs mount point /boot during canonicalization of boot. 
Jan 17 12:15:14.216961 systemd-tmpfiles[1571]: Skipping /boot Jan 17 12:15:14.365248 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:14.441037 systemd[1]: Reloading finished in 305 ms. Jan 17 12:15:14.470458 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 17 12:15:14.484648 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:15:14.490380 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 17 12:15:14.496320 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 17 12:15:14.505140 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 17 12:15:14.513102 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 17 12:15:14.523526 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:14.526105 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:15:14.527682 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:15:14.540941 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:15:14.556775 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:15:14.561178 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:15:14.561370 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:14.562634 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:15:14.563899 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:15:14.569759 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:15:14.569996 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:15:14.581502 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:15:14.583218 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:15:14.600944 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:14.601355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 17 12:15:14.614140 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 17 12:15:14.622053 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 17 12:15:14.638114 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 17 12:15:14.649114 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 17 12:15:14.657118 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 17 12:15:14.658139 systemd[1]: Reached target time-set.target - System Time Set. 
Jan 17 12:15:14.662175 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 17 12:15:14.667335 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 17 12:15:14.672903 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 17 12:15:14.677724 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 17 12:15:14.677963 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 17 12:15:14.681426 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 17 12:15:14.681583 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 17 12:15:14.684675 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 17 12:15:14.684890 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 17 12:15:14.688489 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 17 12:15:14.688704 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 17 12:15:14.697996 systemd[1]: Finished ensure-sysext.service. Jan 17 12:15:14.708962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 17 12:15:14.709051 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 17 12:15:14.716143 systemd-resolved[1671]: Positive Trust Anchors: Jan 17 12:15:14.716164 systemd-resolved[1671]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 17 12:15:14.716210 systemd-resolved[1671]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 17 12:15:14.720152 systemd-resolved[1671]: Using system hostname 'ci-4081.3.0-a-bcafed7e46'. Jan 17 12:15:14.723272 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 17 12:15:14.726206 systemd[1]: Reached target network.target - Network. Jan 17 12:15:14.728408 systemd[1]: Reached target network-online.target - Network is Online. Jan 17 12:15:14.731227 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 17 12:15:14.734126 augenrules[1716]: No rules Jan 17 12:15:14.734862 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:15:15.174130 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 17 12:15:15.178145 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 17 12:15:17.377939 ldconfig[1459]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 17 12:15:17.391813 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Jan 17 12:15:17.401954 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 17 12:15:17.412080 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 17 12:15:17.415251 systemd[1]: Reached target sysinit.target - System Initialization. Jan 17 12:15:17.418060 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 17 12:15:17.421244 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 17 12:15:17.424401 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 17 12:15:17.427008 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 17 12:15:17.430076 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 17 12:15:17.433243 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 17 12:15:17.433311 systemd[1]: Reached target paths.target - Path Units. Jan 17 12:15:17.435742 systemd[1]: Reached target timers.target - Timer Units. Jan 17 12:15:17.439808 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 17 12:15:17.444084 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 17 12:15:17.447797 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 17 12:15:17.451642 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 17 12:15:17.454404 systemd[1]: Reached target sockets.target - Socket Units. Jan 17 12:15:17.457177 systemd[1]: Reached target basic.target - Basic System. Jan 17 12:15:17.459868 systemd[1]: System is tainted: cgroupsv1 Jan 17 12:15:17.460045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:15:17.460192 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 17 12:15:17.467932 systemd[1]: Starting chronyd.service - NTP client/server... Jan 17 12:15:17.472859 systemd[1]: Starting containerd.service - containerd container runtime... Jan 17 12:15:17.481921 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 17 12:15:17.492940 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 17 12:15:17.502404 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 17 12:15:17.509936 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 17 12:15:17.513917 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 17 12:15:17.514110 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 17 12:15:17.522726 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 17 12:15:17.526061 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 17 12:15:17.535921 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 17 12:15:17.544160 (chronyd)[1732]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 17 12:15:17.548514 jq[1739]: false Jan 17 12:15:17.550968 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 17 12:15:17.556807 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 17 12:15:17.561302 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 17 12:15:17.581838 chronyd[1752]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 17 12:15:17.582488 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 17 12:15:17.584823 KVP[1741]: KVP starting; pid is:1741 Jan 17 12:15:17.591946 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 17 12:15:17.603055 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 17 12:15:17.613669 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 17 12:15:17.620944 systemd[1]: Starting update-engine.service - Update Engine... Jan 17 12:15:17.632703 dbus-daemon[1735]: [system] SELinux support is enabled Jan 17 12:15:17.632840 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 17 12:15:17.632844 chronyd[1752]: Timezone right/UTC failed leap second check, ignoring Jan 17 12:15:17.633033 chronyd[1752]: Loaded seccomp filter (level 2) Jan 17 12:15:17.644569 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 17 12:15:17.654789 kernel: hv_utils: KVP IC version 4.0 Jan 17 12:15:17.655499 systemd[1]: Started chronyd.service - NTP client/server. Jan 17 12:15:17.655839 KVP[1741]: KVP LIC Version: 3.1 Jan 17 12:15:17.664241 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 17 12:15:17.665140 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 17 12:15:17.677098 systemd[1]: motdgen.service: Deactivated successfully. Jan 17 12:15:17.677377 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 17 12:15:17.691950 jq[1769]: true Jan 17 12:15:17.697165 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 17 12:15:17.718408 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 17 12:15:17.718851 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 17 12:15:17.755662 (ntainerd)[1782]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 17 12:15:17.773554 extend-filesystems[1740]: Found loop4 Jan 17 12:15:17.773554 extend-filesystems[1740]: Found loop5 Jan 17 12:15:17.773554 extend-filesystems[1740]: Found loop6 Jan 17 12:15:17.773554 extend-filesystems[1740]: Found loop7 Jan 17 12:15:17.773554 extend-filesystems[1740]: Found sda Jan 17 12:15:17.773554 extend-filesystems[1740]: Found sda1 Jan 17 12:15:17.773554 extend-filesystems[1740]: Found sda2 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sda3 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found usr Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sda4 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sda6 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sda7 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sda9 Jan 17 12:15:17.858899 extend-filesystems[1740]: Checking size of /dev/sda9 Jan 17 12:15:17.858899 extend-filesystems[1740]: Old size kept for /dev/sda9 Jan 17 12:15:17.858899 extend-filesystems[1740]: Found sr0 Jan 17 12:15:17.913521 update_engine[1766]: I20250117 12:15:17.805672 1766 main.cc:92] Flatcar Update Engine starting Jan 17 12:15:17.913521 update_engine[1766]: I20250117 12:15:17.823962 1766 update_check_scheduler.cc:74] Next update check in 11m46s Jan 17 12:15:17.776186 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 17 12:15:17.919040 jq[1781]: true Jan 17 12:15:17.919146 tar[1780]: linux-amd64/helm Jan 17 12:15:17.919412 coreos-metadata[1734]: Jan 17 12:15:17.910 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 17 12:15:17.776230 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 17 12:15:17.797110 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 17 12:15:17.797139 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 17 12:15:17.927072 coreos-metadata[1734]: Jan 17 12:15:17.926 INFO Fetch successful Jan 17 12:15:17.827952 systemd[1]: Started update-engine.service - Update Engine. Jan 17 12:15:17.927352 coreos-metadata[1734]: Jan 17 12:15:17.927 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 17 12:15:17.834061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 17 12:15:17.837730 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 17 12:15:17.861099 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 17 12:15:17.861419 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 17 12:15:17.911116 systemd-logind[1758]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 17 12:15:17.913028 systemd-logind[1758]: New seat seat0. Jan 17 12:15:17.915073 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 17 12:15:17.938175 coreos-metadata[1734]: Jan 17 12:15:17.932 INFO Fetch successful Jan 17 12:15:17.938175 coreos-metadata[1734]: Jan 17 12:15:17.932 INFO Fetching http://168.63.129.16/machine/5e8b7ed2-2105-4e55-abfd-6c289e035f83/a3d082b4%2D2828%2D4358%2D8c99%2D0edac83d6ded.%5Fci%2D4081.3.0%2Da%2Dbcafed7e46?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 17 12:15:17.938175 coreos-metadata[1734]: Jan 17 12:15:17.934 INFO Fetch successful Jan 17 12:15:17.938175 coreos-metadata[1734]: Jan 17 12:15:17.935 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 17 12:15:17.955227 coreos-metadata[1734]: Jan 17 12:15:17.952 INFO Fetch successful Jan 17 12:15:18.019604 bash[1818]: Updated "/home/core/.ssh/authorized_keys" Jan 17 12:15:18.016150 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 17 12:15:18.031194 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 17 12:15:18.041197 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 17 12:15:18.042428 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 17 12:15:18.142778 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1833) Jan 17 12:15:18.339866 locksmithd[1799]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 17 12:15:18.497870 sshd_keygen[1767]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 17 12:15:18.536822 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 17 12:15:18.548281 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 17 12:15:18.559243 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 17 12:15:18.578636 systemd[1]: issuegen.service: Deactivated successfully. Jan 17 12:15:18.579054 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 17 12:15:18.598617 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 17 12:15:18.642968 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 17 12:15:18.655113 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 17 12:15:18.664489 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 17 12:15:18.680942 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 17 12:15:18.685204 systemd[1]: Reached target getty.target - Login Prompts. Jan 17 12:15:18.828085 tar[1780]: linux-amd64/LICENSE Jan 17 12:15:18.828348 tar[1780]: linux-amd64/README.md Jan 17 12:15:18.846626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 17 12:15:18.903871 containerd[1782]: time="2025-01-17T12:15:18.902556600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 17 12:15:18.935618 containerd[1782]: time="2025-01-17T12:15:18.935556900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940491400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940537000Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940559100Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940734100Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940753000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940850700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.940867600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.941138600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.941157800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.941175500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942199 containerd[1782]: time="2025-01-17T12:15:18.941189200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.941268400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.941499100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.941684600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.941702500Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.941927500Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Jan 17 12:15:18.942663 containerd[1782]: time="2025-01-17T12:15:18.942006800Z" level=info msg="metadata content store policy set" policy=shared Jan 17 12:15:18.966169 containerd[1782]: time="2025-01-17T12:15:18.966118900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 17 12:15:18.966304 containerd[1782]: time="2025-01-17T12:15:18.966191300Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 17 12:15:18.966304 containerd[1782]: time="2025-01-17T12:15:18.966213500Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 17 12:15:18.966304 containerd[1782]: time="2025-01-17T12:15:18.966233100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 17 12:15:18.966304 containerd[1782]: time="2025-01-17T12:15:18.966250500Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.966446400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967048700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967233000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967271600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967293100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967313600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967387 containerd[1782]: time="2025-01-17T12:15:18.967344100Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967845 containerd[1782]: time="2025-01-17T12:15:18.967363300Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967981 containerd[1782]: time="2025-01-17T12:15:18.967875300Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967981 containerd[1782]: time="2025-01-17T12:15:18.967922300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967981 containerd[1782]: time="2025-01-17T12:15:18.967950600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.967981 containerd[1782]: time="2025-01-17T12:15:18.967978600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 17 12:15:18.968156 containerd[1782]: time="2025-01-17T12:15:18.968002200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." 
type=io.containerd.service.v1 Jan 17 12:15:18.968156 containerd[1782]: time="2025-01-17T12:15:18.968038100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968156 containerd[1782]: time="2025-01-17T12:15:18.968065300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968156 containerd[1782]: time="2025-01-17T12:15:18.968086600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968153800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968181100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968207400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968230200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968254900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968302 containerd[1782]: time="2025-01-17T12:15:18.968276900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968306600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968331300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968355500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968383900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968413900Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968451600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968475600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968505 containerd[1782]: time="2025-01-17T12:15:18.968498700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968560400Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968593200Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968616300Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968639800Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968659600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968681300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968700900Z" level=info msg="NRI interface is disabled by configuration." Jan 17 12:15:18.968784 containerd[1782]: time="2025-01-17T12:15:18.968721100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 17 12:15:18.970793 containerd[1782]: time="2025-01-17T12:15:18.969174800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false 
IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 17 12:15:18.970793 containerd[1782]: time="2025-01-17T12:15:18.969987900Z" level=info msg="Connect containerd service" Jan 17 12:15:18.970793 containerd[1782]: time="2025-01-17T12:15:18.970086500Z" level=info msg="using legacy CRI server" Jan 17 12:15:18.970793 containerd[1782]: time="2025-01-17T12:15:18.970098500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 17 12:15:18.970793 containerd[1782]: time="2025-01-17T12:15:18.970263700Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 17 12:15:18.972140 containerd[1782]: time="2025-01-17T12:15:18.972066800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 17 12:15:18.972446 containerd[1782]: time="2025-01-17T12:15:18.972317500Z" level=info msg="Start subscribing containerd event" Jan 17 12:15:18.972578 containerd[1782]: time="2025-01-17T12:15:18.972519700Z" level=info msg="Start recovering state" Jan 17 12:15:18.972708 containerd[1782]: time="2025-01-17T12:15:18.972670700Z" level=info msg="Start event monitor" Jan 17 12:15:18.972832 containerd[1782]: time="2025-01-17T12:15:18.972697600Z" level=info msg="Start snapshots syncer" Jan 17 12:15:18.972832 containerd[1782]: time="2025-01-17T12:15:18.972784400Z" level=info msg="Start cni network conf syncer for default" Jan 17 12:15:18.972832 containerd[1782]: time="2025-01-17T12:15:18.972795300Z" level=info msg="Start streaming server" Jan 17 12:15:18.973362 containerd[1782]: time="2025-01-17T12:15:18.973332300Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 17 12:15:18.974913 containerd[1782]: time="2025-01-17T12:15:18.973422400Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 17 12:15:18.973692 systemd[1]: Started containerd.service - containerd container runtime. Jan 17 12:15:18.975104 containerd[1782]: time="2025-01-17T12:15:18.975084200Z" level=info msg="containerd successfully booted in 0.073532s" Jan 17 12:15:19.392951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:19.398104 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 17 12:15:19.401544 systemd[1]: Startup finished in 794ms (firmware) + 27.032s (loader) + 12.437s (kernel) + 13.032s (userspace) = 53.296s. Jan 17 12:15:19.408446 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:19.741791 login[1898]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:15:19.746901 login[1900]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 17 12:15:19.759895 systemd-logind[1758]: New session 1 of user core. Jan 17 12:15:19.761993 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 17 12:15:19.770115 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 17 12:15:19.778821 systemd-logind[1758]: New session 2 of user core. 
Jan 17 12:15:19.789089 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 17 12:15:19.800398 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 17 12:15:19.817644 (systemd)[1935]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 17 12:15:20.019080 systemd[1935]: Queued start job for default target default.target. Jan 17 12:15:20.019583 systemd[1935]: Created slice app.slice - User Application Slice. Jan 17 12:15:20.019613 systemd[1935]: Reached target paths.target - Paths. Jan 17 12:15:20.019630 systemd[1935]: Reached target timers.target - Timers. Jan 17 12:15:20.028056 systemd[1935]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 17 12:15:20.038854 systemd[1935]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 17 12:15:20.038947 systemd[1935]: Reached target sockets.target - Sockets. Jan 17 12:15:20.038969 systemd[1935]: Reached target basic.target - Basic System. Jan 17 12:15:20.039022 systemd[1935]: Reached target default.target - Main User Target. Jan 17 12:15:20.039063 systemd[1935]: Startup finished in 212ms. Jan 17 12:15:20.039413 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 17 12:15:20.046113 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 17 12:15:20.047086 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 17 12:15:20.278535 kubelet[1922]: E0117 12:15:20.278270 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:20.281638 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:20.282028 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 17 12:15:20.366389 waagent[1894]: 2025-01-17T12:15:20.366283Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 17 12:15:20.370215 waagent[1894]: 2025-01-17T12:15:20.370142Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.371292Z INFO Daemon Daemon Python: 3.11.9 Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.372385Z INFO Daemon Daemon Run daemon Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.373047Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.373879Z INFO Daemon Daemon Using waagent for provisioning Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.374437Z INFO Daemon Daemon Activate resource disk Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.375139Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.379157Z INFO Daemon Daemon Found device: None Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.379674Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.380650Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.383173Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:15:20.401254 waagent[1894]: 2025-01-17T12:15:20.384112Z INFO Daemon Daemon Running default provisioning handler Jan 17 12:15:20.404701 waagent[1894]: 2025-01-17T12:15:20.404617Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 17 12:15:20.411093 waagent[1894]: 2025-01-17T12:15:20.411032Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 17 12:15:20.419080 waagent[1894]: 2025-01-17T12:15:20.412069Z INFO Daemon Daemon cloud-init is enabled: False Jan 17 12:15:20.419080 waagent[1894]: 2025-01-17T12:15:20.412838Z INFO Daemon Daemon Copying ovf-env.xml Jan 17 12:15:20.554848 waagent[1894]: 2025-01-17T12:15:20.551051Z INFO Daemon Daemon Successfully mounted dvd Jan 17 12:15:20.566852 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.568543Z INFO Daemon Daemon Detect protocol endpoint Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.569580Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.570258Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.570899Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.571862Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 17 12:15:20.573163 waagent[1894]: 2025-01-17T12:15:20.572376Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 17 12:15:20.597218 waagent[1894]: 2025-01-17T12:15:20.597157Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 17 12:15:20.603609 waagent[1894]: 2025-01-17T12:15:20.598444Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 17 12:15:20.603609 waagent[1894]: 2025-01-17T12:15:20.598914Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 17 12:15:20.694664 waagent[1894]: 2025-01-17T12:15:20.694553Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 17 12:15:20.697556 waagent[1894]: 2025-01-17T12:15:20.697483Z INFO Daemon Daemon Forcing an update of the goal state. Jan 17 12:15:20.703313 waagent[1894]: 2025-01-17T12:15:20.703249Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:15:20.720102 waagent[1894]: 2025-01-17T12:15:20.720048Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.721663Z INFO Daemon Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.723310Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: dd5316a3-e4bf-477e-8fd6-190de576313a eTag: 4619910711185231624 source: Fabric] Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.724843Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.726292Z INFO Daemon Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.726628Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:15:20.737209 waagent[1894]: 2025-01-17T12:15:20.731395Z INFO Daemon Daemon Downloading artifacts profile blob Jan 17 12:15:20.801139 waagent[1894]: 2025-01-17T12:15:20.801054Z INFO Daemon Downloaded certificate {'thumbprint': 'CE027C47BCA30AB942AFEA07ADDEEE6757D290E9', 'hasPrivateKey': False} Jan 17 12:15:20.805957 waagent[1894]: 2025-01-17T12:15:20.805844Z INFO Daemon Downloaded certificate {'thumbprint': 'ADB00484391901A781BCF185430959EDCC9F157F', 'hasPrivateKey': True} Jan 17 12:15:20.811961 waagent[1894]: 2025-01-17T12:15:20.807243Z INFO Daemon Fetch goal state completed Jan 17 12:15:20.815243 waagent[1894]: 2025-01-17T12:15:20.815194Z INFO Daemon Daemon Starting provisioning Jan 17 12:15:20.821804 waagent[1894]: 2025-01-17T12:15:20.816273Z INFO Daemon Daemon Handle ovf-env.xml. Jan 17 12:15:20.821804 waagent[1894]: 2025-01-17T12:15:20.817272Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-bcafed7e46] Jan 17 12:15:20.850566 waagent[1894]: 2025-01-17T12:15:20.850486Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-bcafed7e46] Jan 17 12:15:20.858711 waagent[1894]: 2025-01-17T12:15:20.852094Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 17 12:15:20.858711 waagent[1894]: 2025-01-17T12:15:20.852951Z INFO Daemon Daemon Primary interface is [eth0] Jan 17 12:15:20.876558 systemd-networkd[1362]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 17 12:15:20.876566 systemd-networkd[1362]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 17 12:15:20.876617 systemd-networkd[1362]: eth0: DHCP lease lost Jan 17 12:15:20.877951 waagent[1894]: 2025-01-17T12:15:20.877887Z INFO Daemon Daemon Create user account if not exists Jan 17 12:15:20.891441 waagent[1894]: 2025-01-17T12:15:20.879568Z INFO Daemon Daemon User core already exists, skip useradd Jan 17 12:15:20.891441 waagent[1894]: 2025-01-17T12:15:20.880304Z INFO Daemon Daemon Configure sudoer Jan 17 12:15:20.891441 waagent[1894]: 2025-01-17T12:15:20.881421Z INFO Daemon Daemon Configure sshd Jan 17 12:15:20.891441 waagent[1894]: 2025-01-17T12:15:20.882545Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 17 12:15:20.891441 waagent[1894]: 2025-01-17T12:15:20.883191Z INFO Daemon Daemon Deploy ssh public key. Jan 17 12:15:20.893867 systemd-networkd[1362]: eth0: DHCPv6 lease lost Jan 17 12:15:20.925829 systemd-networkd[1362]: eth0: DHCPv4 address 10.200.8.43/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 17 12:15:30.532344 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 17 12:15:30.538022 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:30.651043 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:30.651314 (kubelet)[2011]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:31.253813 kubelet[2011]: E0117 12:15:31.253715 2011 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:31.258384 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:31.258718 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:41.436094 chronyd[1752]: Selected source PHC0 Jan 17 12:15:41.509199 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 17 12:15:41.515030 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:15:41.626158 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:41.636410 (kubelet)[2033]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:42.120389 kubelet[2033]: E0117 12:15:42.120282 2033 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:42.123510 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:42.123850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:50.967380 waagent[1894]: 2025-01-17T12:15:50.967294Z INFO Daemon Daemon Provisioning complete Jan 17 12:15:50.981943 waagent[1894]: 2025-01-17T12:15:50.981867Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 17 12:15:50.989351 waagent[1894]: 2025-01-17T12:15:50.983186Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 17 12:15:50.989351 waagent[1894]: 2025-01-17T12:15:50.984230Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 17 12:15:51.117684 waagent[2042]: 2025-01-17T12:15:51.117572Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 17 12:15:51.118234 waagent[2042]: 2025-01-17T12:15:51.117798Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 17 12:15:51.118234 waagent[2042]: 2025-01-17T12:15:51.117901Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 17 12:15:51.127625 waagent[2042]: 2025-01-17T12:15:51.127511Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 17 12:15:51.127910 waagent[2042]: 2025-01-17T12:15:51.127855Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:15:51.128015 waagent[2042]: 2025-01-17T12:15:51.127972Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:15:51.136338 waagent[2042]: 2025-01-17T12:15:51.136254Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 17 12:15:51.147403 waagent[2042]: 2025-01-17T12:15:51.147332Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 17 12:15:51.148022 waagent[2042]: 2025-01-17T12:15:51.147964Z INFO ExtHandler Jan 17 12:15:51.148130 waagent[2042]: 2025-01-17T12:15:51.148067Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: ab2c4398-f578-43b0-8584-e4ab6dfe3e10 eTag: 4619910711185231624 source: Fabric] Jan 17 12:15:51.148466 waagent[2042]: 2025-01-17T12:15:51.148411Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Jan 17 12:15:51.149064 waagent[2042]: 2025-01-17T12:15:51.149008Z INFO ExtHandler Jan 17 12:15:51.149138 waagent[2042]: 2025-01-17T12:15:51.149094Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 17 12:15:51.152933 waagent[2042]: 2025-01-17T12:15:51.152890Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 17 12:15:51.248904 waagent[2042]: 2025-01-17T12:15:51.248722Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CE027C47BCA30AB942AFEA07ADDEEE6757D290E9', 'hasPrivateKey': False} Jan 17 12:15:51.249322 waagent[2042]: 2025-01-17T12:15:51.249262Z INFO ExtHandler Downloaded certificate {'thumbprint': 'ADB00484391901A781BCF185430959EDCC9F157F', 'hasPrivateKey': True} Jan 17 12:15:51.249814 waagent[2042]: 2025-01-17T12:15:51.249740Z INFO ExtHandler Fetch goal state completed Jan 17 12:15:51.265743 waagent[2042]: 2025-01-17T12:15:51.265658Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2042 Jan 17 12:15:51.265971 waagent[2042]: 2025-01-17T12:15:51.265909Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 17 12:15:51.267719 waagent[2042]: 2025-01-17T12:15:51.267655Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 17 12:15:51.268147 waagent[2042]: 2025-01-17T12:15:51.268094Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 17 12:15:51.290049 waagent[2042]: 2025-01-17T12:15:51.289997Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 17 12:15:51.290298 waagent[2042]: 2025-01-17T12:15:51.290249Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 17 12:15:51.297177 waagent[2042]: 2025-01-17T12:15:51.297132Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 17 12:15:51.304448 systemd[1]: Reloading requested from client PID 2057 ('systemctl') (unit waagent.service)... Jan 17 12:15:51.304466 systemd[1]: Reloading... Jan 17 12:15:51.392788 zram_generator::config[2091]: No configuration found. Jan 17 12:15:51.522606 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:51.599934 systemd[1]: Reloading finished in 294 ms. Jan 17 12:15:51.624791 waagent[2042]: 2025-01-17T12:15:51.622800Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 17 12:15:51.632553 systemd[1]: Reloading requested from client PID 2153 ('systemctl') (unit waagent.service)... Jan 17 12:15:51.632570 systemd[1]: Reloading... Jan 17 12:15:51.726950 zram_generator::config[2187]: No configuration found. Jan 17 12:15:51.853617 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:15:51.929997 systemd[1]: Reloading finished in 296 ms. 
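Both daemon-reload cycles print the same docker.socket warning: line 6 of the unit still spells the socket path with the legacy /var/run prefix. The fix systemd is asking for is the one-line change below (the rest of the unit is not shown in the log):

    [Socket]
    ListenStream=/run/docker.sock

On Flatcar, /var/run is a symlink to /run, so the socket works either way; the warning is only about the stale spelling in the unit file.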
Jan 17 12:15:51.957978 waagent[2042]: 2025-01-17T12:15:51.956468Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 17 12:15:51.957978 waagent[2042]: 2025-01-17T12:15:51.956691Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 17 12:15:52.315053 waagent[2042]: 2025-01-17T12:15:52.314869Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 17 12:15:52.315872 waagent[2042]: 2025-01-17T12:15:52.315797Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 17 12:15:52.316860 waagent[2042]: 2025-01-17T12:15:52.316786Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 17 12:15:52.317024 waagent[2042]: 2025-01-17T12:15:52.316963Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:15:52.317211 waagent[2042]: 2025-01-17T12:15:52.317155Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:15:52.317794 waagent[2042]: 2025-01-17T12:15:52.317703Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 17 12:15:52.318058 waagent[2042]: 2025-01-17T12:15:52.318004Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 17 12:15:52.318284 waagent[2042]: 2025-01-17T12:15:52.318220Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 17 12:15:52.318747 waagent[2042]: 2025-01-17T12:15:52.318668Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 17 12:15:52.318867 waagent[2042]: 2025-01-17T12:15:52.318740Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 17 12:15:52.319387 waagent[2042]: 2025-01-17T12:15:52.319309Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 17 12:15:52.319501 waagent[2042]: 2025-01-17T12:15:52.319429Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
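The handler lines above describe the agent's steady state: every goal-state period (6 seconds here) it re-checks the goal state and only reprocesses extensions when the incarnation changes. Roughly, with stand-in functions (neither name is from the agent):

    import time

    def fetch_incarnation() -> int:
        return 1   # stub; the real agent parses this from the goal-state XML

    def process_extensions() -> None:
        pass       # stub; the real agent downloads and runs extension handlers

    last = None
    while True:
        cur = fetch_incarnation()
        if cur != last:
            process_extensions()
            last = cur
        time.sleep(6)  # "Goal State Period: 6 sec" per the log above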
Jan 17 12:15:52.319827 waagent[2042]: 2025-01-17T12:15:52.319753Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 17 12:15:52.320372 waagent[2042]: 2025-01-17T12:15:52.320304Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 17 12:15:52.320372 waagent[2042]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 17 12:15:52.320372 waagent[2042]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 17 12:15:52.320372 waagent[2042]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 17 12:15:52.320372 waagent[2042]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:15:52.320372 waagent[2042]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:15:52.320372 waagent[2042]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 17 12:15:52.320892 waagent[2042]: 2025-01-17T12:15:52.320815Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 17 12:15:52.323095 waagent[2042]: 2025-01-17T12:15:52.323027Z INFO EnvHandler ExtHandler Configure routes Jan 17 12:15:52.323508 waagent[2042]: 2025-01-17T12:15:52.323399Z INFO EnvHandler ExtHandler Gateway:None Jan 17 12:15:52.324025 waagent[2042]: 2025-01-17T12:15:52.323960Z INFO EnvHandler ExtHandler Routes:None Jan 17 12:15:52.328113 waagent[2042]: 2025-01-17T12:15:52.328073Z INFO ExtHandler ExtHandler Jan 17 12:15:52.328643 waagent[2042]: 2025-01-17T12:15:52.328594Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: bb43c29b-b707-4e41-9c96-b8d011585917 correlation 7fd65ffc-da5b-4ae6-abe8-7b5e09c906d6 created: 2025-01-17T12:14:15.049423Z] Jan 17 12:15:52.329938 waagent[2042]: 2025-01-17T12:15:52.329892Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 17 12:15:52.331381 waagent[2042]: 2025-01-17T12:15:52.331335Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 17 12:15:52.343888 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 17 12:15:52.356100 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
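The routing table the MonitorHandler prints comes straight from /proc/net/route, where Destination, Gateway and Mask are little-endian hex IPv4 values. Decoding them identifies the routes that matter on an Azure VM:

    import socket, struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores addresses as little-endian 32-bit hex
        return socket.inet_ntoa(struct.pack("<I", int(h, 16)))

    print(hex_to_ip("0108C80A"))  # 10.200.8.1      - default gateway from the DHCP lease
    print(hex_to_ip("10813FA8"))  # 168.63.129.16   - host route to the WireServer
    print(hex_to_ip("FEA9FEA9"))  # 169.254.169.254 - host route to instance metadata

So the last two rows pin dedicated routes for the two Azure platform endpoints through the gateway.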
Jan 17 12:15:52.370422 waagent[2042]: 2025-01-17T12:15:52.370359Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: CB83027E-43F8-4D1B-B905-E554E7EB7EBD;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 17 12:15:52.385601 waagent[2042]: 2025-01-17T12:15:52.385512Z INFO MonitorHandler ExtHandler Network interfaces: Jan 17 12:15:52.385601 waagent[2042]: Executing ['ip', '-a', '-o', 'link']: Jan 17 12:15:52.385601 waagent[2042]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 17 12:15:52.385601 waagent[2042]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:af:fd brd ff:ff:ff:ff:ff:ff Jan 17 12:15:52.385601 waagent[2042]: 3: enP61074s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 60:45:bd:e0:af:fd brd ff:ff:ff:ff:ff:ff\ altname enP61074p0s2 Jan 17 12:15:52.385601 waagent[2042]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 17 12:15:52.385601 waagent[2042]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 17 12:15:52.385601 waagent[2042]: 2: eth0 inet 10.200.8.43/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 17 12:15:52.385601 waagent[2042]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 17 12:15:52.385601 waagent[2042]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 17 12:15:52.385601 waagent[2042]: 2: eth0 inet6 fe80::6245:bdff:fee0:affd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:15:52.385601 waagent[2042]: 3: enP61074s1 inet6 fe80::6245:bdff:fee0:affd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 17 12:15:52.538951 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:15:52.549161 (kubelet)[2286]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:15:52.594999 kubelet[2286]: E0117 12:15:52.594844 2286 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:15:52.597881 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:15:52.598230 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:15:53.063290 waagent[2042]: 2025-01-17T12:15:53.063140Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 17 12:15:53.063290 waagent[2042]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.063290 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.063290 waagent[2042]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.063290 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.063290 waagent[2042]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.063290 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.063290 waagent[2042]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:15:53.063290 waagent[2042]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:15:53.063290 waagent[2042]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:15:53.066569 waagent[2042]: 2025-01-17T12:15:53.066507Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 17 12:15:53.066569 waagent[2042]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.066569 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.066569 waagent[2042]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.066569 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.066569 waagent[2042]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 17 12:15:53.066569 waagent[2042]: pkts bytes target prot opt in out source destination Jan 17 12:15:53.066569 waagent[2042]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 17 12:15:53.066569 waagent[2042]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 17 12:15:53.066569 waagent[2042]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 17 12:15:53.067034 waagent[2042]: 2025-01-17T12:15:53.066852Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 17 12:15:59.680386 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 17 12:16:02.612967 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 17 12:16:02.625018 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:02.738948 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:02.748153 (kubelet)[2316]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:16:02.794815 kubelet[2316]: E0117 12:16:02.794739 2316 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:16:02.797653 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:16:02.797992 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:16:02.973055 update_engine[1766]: I20250117 12:16:02.972821 1766 update_attempter.cc:509] Updating boot flags... Jan 17 12:16:03.305802 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2338) Jan 17 12:16:12.862886 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 17 12:16:12.870998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
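The firewall table above (printed twice: once right after the rules are added, once by the EnvHandler's check) amounts to three OUTPUT rules guarding the WireServer: DNS to 168.63.129.16 port 53 is allowed, processes running as root (owner UID 0) may reach it, and any other new or invalid connection to it is dropped. As plain iptables invocations these would look roughly like the following sketch (the dump does not say which table the agent used):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

The intent is to keep non-root workloads away from the wire server, which hands out credentials during provisioning.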
Jan 17 12:16:13.231961 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:13.236096 (kubelet)[2377]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:16:13.477644 kubelet[2377]: E0117 12:16:13.477576 2377 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:16:13.480471 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:16:13.480820 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:16:22.101175 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 17 12:16:22.106098 systemd[1]: Started sshd@0-10.200.8.43:22-10.200.16.10:41414.service - OpenSSH per-connection server daemon (10.200.16.10:41414). Jan 17 12:16:22.802687 sshd[2386]: Accepted publickey for core from 10.200.16.10 port 41414 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:22.804610 sshd[2386]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:22.809325 systemd-logind[1758]: New session 3 of user core. Jan 17 12:16:22.816188 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 17 12:16:23.377099 systemd[1]: Started sshd@1-10.200.8.43:22-10.200.16.10:41418.service - OpenSSH per-connection server daemon (10.200.16.10:41418). Jan 17 12:16:23.612711 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jan 17 12:16:23.618994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:24.031535 sshd[2391]: Accepted publickey for core from 10.200.16.10 port 41418 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:24.033187 sshd[2391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:24.038378 systemd-logind[1758]: New session 4 of user core. Jan 17 12:16:24.045082 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 17 12:16:24.073979 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:24.085225 (kubelet)[2407]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:16:24.216579 kubelet[2407]: E0117 12:16:24.216511 2407 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:16:24.219480 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:16:24.219836 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:16:24.503706 sshd[2391]: pam_unix(sshd:session): session closed for user core Jan 17 12:16:24.508461 systemd[1]: sshd@1-10.200.8.43:22-10.200.16.10:41418.service: Deactivated successfully. Jan 17 12:16:24.512677 systemd-logind[1758]: Session 4 logged out. Waiting for processes to exit. Jan 17 12:16:24.513024 systemd[1]: session-4.scope: Deactivated successfully. 
Jan 17 12:16:24.514675 systemd-logind[1758]: Removed session 4. Jan 17 12:16:24.620384 systemd[1]: Started sshd@2-10.200.8.43:22-10.200.16.10:41430.service - OpenSSH per-connection server daemon (10.200.16.10:41430). Jan 17 12:16:25.262604 sshd[2420]: Accepted publickey for core from 10.200.16.10 port 41430 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:25.264449 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:25.269047 systemd-logind[1758]: New session 5 of user core. Jan 17 12:16:25.277053 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 17 12:16:25.727887 sshd[2420]: pam_unix(sshd:session): session closed for user core Jan 17 12:16:25.731143 systemd[1]: sshd@2-10.200.8.43:22-10.200.16.10:41430.service: Deactivated successfully. Jan 17 12:16:25.735261 systemd-logind[1758]: Session 5 logged out. Waiting for processes to exit. Jan 17 12:16:25.737008 systemd[1]: session-5.scope: Deactivated successfully. Jan 17 12:16:25.738785 systemd-logind[1758]: Removed session 5. Jan 17 12:16:25.850385 systemd[1]: Started sshd@3-10.200.8.43:22-10.200.16.10:41446.service - OpenSSH per-connection server daemon (10.200.16.10:41446). Jan 17 12:16:26.493429 sshd[2428]: Accepted publickey for core from 10.200.16.10 port 41446 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:26.495225 sshd[2428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:26.499825 systemd-logind[1758]: New session 6 of user core. Jan 17 12:16:26.507078 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 17 12:16:26.959847 sshd[2428]: pam_unix(sshd:session): session closed for user core Jan 17 12:16:26.965464 systemd[1]: sshd@3-10.200.8.43:22-10.200.16.10:41446.service: Deactivated successfully. Jan 17 12:16:26.969673 systemd[1]: session-6.scope: Deactivated successfully. Jan 17 12:16:26.970644 systemd-logind[1758]: Session 6 logged out. Waiting for processes to exit. Jan 17 12:16:26.971580 systemd-logind[1758]: Removed session 6. Jan 17 12:16:27.071474 systemd[1]: Started sshd@4-10.200.8.43:22-10.200.16.10:49008.service - OpenSSH per-connection server daemon (10.200.16.10:49008). Jan 17 12:16:27.725259 sshd[2436]: Accepted publickey for core from 10.200.16.10 port 49008 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:27.727096 sshd[2436]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:27.732105 systemd-logind[1758]: New session 7 of user core. Jan 17 12:16:27.738060 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 17 12:16:28.244166 sudo[2440]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 17 12:16:28.244555 sudo[2440]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:16:28.280287 sudo[2440]: pam_unix(sudo:session): session closed for user root Jan 17 12:16:28.392985 sshd[2436]: pam_unix(sshd:session): session closed for user core Jan 17 12:16:28.397666 systemd[1]: sshd@4-10.200.8.43:22-10.200.16.10:49008.service: Deactivated successfully. Jan 17 12:16:28.402034 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 12:16:28.402753 systemd-logind[1758]: Session 7 logged out. Waiting for processes to exit. Jan 17 12:16:28.403721 systemd-logind[1758]: Removed session 7. 
Jan 17 12:16:28.512351 systemd[1]: Started sshd@5-10.200.8.43:22-10.200.16.10:49020.service - OpenSSH per-connection server daemon (10.200.16.10:49020). Jan 17 12:16:29.156444 sshd[2445]: Accepted publickey for core from 10.200.16.10 port 49020 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:29.158321 sshd[2445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:29.163481 systemd-logind[1758]: New session 8 of user core. Jan 17 12:16:29.170020 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 17 12:16:29.514025 sudo[2450]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 17 12:16:29.514393 sudo[2450]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:16:29.518006 sudo[2450]: pam_unix(sudo:session): session closed for user root Jan 17 12:16:29.522969 sudo[2449]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 17 12:16:29.523317 sudo[2449]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:16:29.540335 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 17 12:16:29.541742 auditctl[2453]: No rules Jan 17 12:16:29.543175 systemd[1]: audit-rules.service: Deactivated successfully. Jan 17 12:16:29.543615 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 17 12:16:29.547270 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 17 12:16:29.573612 augenrules[2472]: No rules Jan 17 12:16:29.575363 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 17 12:16:29.577834 sudo[2449]: pam_unix(sudo:session): session closed for user root Jan 17 12:16:29.688502 sshd[2445]: pam_unix(sshd:session): session closed for user core Jan 17 12:16:29.693396 systemd[1]: sshd@5-10.200.8.43:22-10.200.16.10:49020.service: Deactivated successfully. Jan 17 12:16:29.697353 systemd[1]: session-8.scope: Deactivated successfully. Jan 17 12:16:29.698179 systemd-logind[1758]: Session 8 logged out. Waiting for processes to exit. Jan 17 12:16:29.699072 systemd-logind[1758]: Removed session 8. Jan 17 12:16:29.801358 systemd[1]: Started sshd@6-10.200.8.43:22-10.200.16.10:49036.service - OpenSSH per-connection server daemon (10.200.16.10:49036). Jan 17 12:16:30.444142 sshd[2481]: Accepted publickey for core from 10.200.16.10 port 49036 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:16:30.445952 sshd[2481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:16:30.451370 systemd-logind[1758]: New session 9 of user core. Jan 17 12:16:30.461127 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 12:16:30.801269 sudo[2485]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 17 12:16:30.801644 sudo[2485]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 17 12:16:31.808090 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 17 12:16:31.808275 (dockerd)[2500]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 17 12:16:33.027015 dockerd[2500]: time="2025-01-17T12:16:33.026949840Z" level=info msg="Starting up" Jan 17 12:16:33.461866 dockerd[2500]: time="2025-01-17T12:16:33.461818266Z" level=info msg="Loading containers: start." Jan 17 12:16:33.629058 kernel: Initializing XFRM netlink socket Jan 17 12:16:33.742116 systemd-networkd[1362]: docker0: Link UP Jan 17 12:16:33.768038 dockerd[2500]: time="2025-01-17T12:16:33.767988832Z" level=info msg="Loading containers: done." Jan 17 12:16:33.842982 dockerd[2500]: time="2025-01-17T12:16:33.842923840Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 17 12:16:33.843236 dockerd[2500]: time="2025-01-17T12:16:33.843064943Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 17 12:16:33.843236 dockerd[2500]: time="2025-01-17T12:16:33.843209746Z" level=info msg="Daemon has completed initialization" Jan 17 12:16:33.896665 dockerd[2500]: time="2025-01-17T12:16:33.896597691Z" level=info msg="API listen on /run/docker.sock" Jan 17 12:16:33.897166 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 17 12:16:34.232978 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jan 17 12:16:34.240092 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:34.417967 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:34.422288 (kubelet)[2647]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:16:34.468150 kubelet[2647]: E0117 12:16:34.468000 2647 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:16:34.471003 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:16:34.471325 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 17 12:16:36.128976 containerd[1782]: time="2025-01-17T12:16:36.128938065Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\"" Jan 17 12:16:36.849711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1785089772.mount: Deactivated successfully. 
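With "API listen on /run/docker.sock" logged, the daemon is reachable over the Unix socket by anything that speaks the Engine API. A quick probe, assuming the third-party Python docker SDK is available (nothing in the log says it is installed on this image):

    import docker  # pip install docker; an assumption, not shown in the log

    client = docker.DockerClient(base_url="unix:///run/docker.sock")
    print(client.version()["Version"])  # the daemon above identifies itself as 26.1.0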
Jan 17 12:16:38.605509 containerd[1782]: time="2025-01-17T12:16:38.605452147Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:38.608278 containerd[1782]: time="2025-01-17T12:16:38.608213408Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.13: active requests=0, bytes read=35140738" Jan 17 12:16:38.612549 containerd[1782]: time="2025-01-17T12:16:38.612491702Z" level=info msg="ImageCreate event name:\"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:38.616657 containerd[1782]: time="2025-01-17T12:16:38.616622493Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:38.618132 containerd[1782]: time="2025-01-17T12:16:38.617605514Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.13\" with image id \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.13\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e5c42861045d0615769fad8a4e32e476fc5e59020157b60ced1bb7a69d4a5ce9\", size \"35137530\" in 2.488623648s" Jan 17 12:16:38.618132 containerd[1782]: time="2025-01-17T12:16:38.617649415Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.13\" returns image reference \"sha256:724efdc6b8440d2c78ced040ad90bb8af5553b7ed46439937b567cca86ae5e1b\"" Jan 17 12:16:38.640162 containerd[1782]: time="2025-01-17T12:16:38.640122209Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\"" Jan 17 12:16:40.415105 containerd[1782]: time="2025-01-17T12:16:40.414977710Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:40.420177 containerd[1782]: time="2025-01-17T12:16:40.420102423Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.13: active requests=0, bytes read=32216649" Jan 17 12:16:40.424374 containerd[1782]: time="2025-01-17T12:16:40.424322215Z" level=info msg="ImageCreate event name:\"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:40.431414 containerd[1782]: time="2025-01-17T12:16:40.431361070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:40.432414 containerd[1782]: time="2025-01-17T12:16:40.432382493Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.13\" with image id \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.13\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:fc2838399752740bdd36c7e9287d4406feff6bef2baff393174b34ccd447b780\", size \"33663223\" in 1.791819274s" Jan 17 12:16:40.433111 containerd[1782]: time="2025-01-17T12:16:40.432526596Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.13\" returns image reference \"sha256:04dd549807d4487a115aab24e9c53dbb8c711ed9a3b138a206e161800b9975ab\"" Jan 17 
12:16:40.455450 containerd[1782]: time="2025-01-17T12:16:40.455296896Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\"" Jan 17 12:16:41.678449 containerd[1782]: time="2025-01-17T12:16:41.678386373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:41.681149 containerd[1782]: time="2025-01-17T12:16:41.681099032Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.13: active requests=0, bytes read=17332849" Jan 17 12:16:41.685934 containerd[1782]: time="2025-01-17T12:16:41.685878537Z" level=info msg="ImageCreate event name:\"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:41.691592 containerd[1782]: time="2025-01-17T12:16:41.691541862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:41.692700 containerd[1782]: time="2025-01-17T12:16:41.692552784Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.13\" with image id \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.13\", repo digest \"registry.k8s.io/kube-scheduler@sha256:a4f1649a5249c0784963d85644b1e614548f032da9b4fb00a760bac02818ce4f\", size \"18779441\" in 1.237210987s" Jan 17 12:16:41.692700 containerd[1782]: time="2025-01-17T12:16:41.692591585Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.13\" returns image reference \"sha256:42b8a40668702c6f34141af8c536b486852dd3b2483c9b50a608d2377da8c8e8\"" Jan 17 12:16:41.716279 containerd[1782]: time="2025-01-17T12:16:41.716237604Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\"" Jan 17 12:16:42.930583 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2084497044.mount: Deactivated successfully. 
Jan 17 12:16:43.391144 containerd[1782]: time="2025-01-17T12:16:43.391085808Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.13\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:43.393585 containerd[1782]: time="2025-01-17T12:16:43.393526562Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.13: active requests=0, bytes read=28620949" Jan 17 12:16:43.398234 containerd[1782]: time="2025-01-17T12:16:43.398180864Z" level=info msg="ImageCreate event name:\"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:43.401891 containerd[1782]: time="2025-01-17T12:16:43.401827944Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:43.402987 containerd[1782]: time="2025-01-17T12:16:43.402414957Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.13\" with image id \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\", repo tag \"registry.k8s.io/kube-proxy:v1.29.13\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd45846de733434501e436638a7a240f2d379bf0a6bb0404a7684e0cf52c4011\", size \"28619960\" in 1.686129652s" Jan 17 12:16:43.402987 containerd[1782]: time="2025-01-17T12:16:43.402455658Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.13\" returns image reference \"sha256:f20cf1600da6cce7b7d3fdd3b5ff91243983ea8be3907cccaee1a956770a2f15\"" Jan 17 12:16:43.425364 containerd[1782]: time="2025-01-17T12:16:43.425320860Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 17 12:16:43.985860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346090649.mount: Deactivated successfully. Jan 17 12:16:44.612894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Jan 17 12:16:44.622212 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:44.863956 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:44.867935 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 17 12:16:44.913152 kubelet[2790]: E0117 12:16:44.913090 2790 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 17 12:16:44.916029 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 17 12:16:44.916352 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
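By this point kubelet.service has cycled through eight scheduled restarts, one roughly every ten seconds (compare the timestamps on the restart-counter lines), each dying on the same missing /var/lib/kubelet/config.yaml. On a kubeadm-managed node that file is generated by kubeadm init or kubeadm join rather than written by hand, so the crash loop simply means bootstrap has not run yet. For orientation, a minimal sketch of the kind of file that eventually lands there (values illustrative; the cgroup driver and static-pod path match what the kubelet logs once it does start):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: cgroupfs                     # "CgroupDriver":"cgroupfs" in the later startup dump
    staticPodPath: /etc/kubernetes/manifests   # "Adding static pod path" in the later startup dump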
Jan 17 12:16:45.755510 containerd[1782]: time="2025-01-17T12:16:45.755444804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:45.758627 containerd[1782]: time="2025-01-17T12:16:45.758561080Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 17 12:16:45.764011 containerd[1782]: time="2025-01-17T12:16:45.763971513Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:45.769320 containerd[1782]: time="2025-01-17T12:16:45.769253142Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:45.770449 containerd[1782]: time="2025-01-17T12:16:45.770268066Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.344905405s" Jan 17 12:16:45.770449 containerd[1782]: time="2025-01-17T12:16:45.770310667Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 17 12:16:45.791881 containerd[1782]: time="2025-01-17T12:16:45.791804992Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 17 12:16:46.349086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3533050650.mount: Deactivated successfully. 
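The pull summaries pair byte counts with wall-clock time, so effective throughput falls out directly; for the coredns image just logged:

    bytes_read = 18185769    # "bytes read" reported for coredns above
    seconds = 2.344905405    # pull duration from the same log line
    print(f"{bytes_read / seconds / 1e6:.1f} MB/s")  # ~7.8 MB/s

which is in line with the other pulls in this run (the larger etcd image below works out to about 20 MB/s).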
Jan 17 12:16:46.372098 containerd[1782]: time="2025-01-17T12:16:46.372020362Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:46.374566 containerd[1782]: time="2025-01-17T12:16:46.374479122Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 17 12:16:46.378818 containerd[1782]: time="2025-01-17T12:16:46.378726526Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:46.386089 containerd[1782]: time="2025-01-17T12:16:46.385941102Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:46.386849 containerd[1782]: time="2025-01-17T12:16:46.386587618Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 594.734825ms" Jan 17 12:16:46.386849 containerd[1782]: time="2025-01-17T12:16:46.386631219Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 17 12:16:46.411290 containerd[1782]: time="2025-01-17T12:16:46.411247420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jan 17 12:16:46.995840 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692579176.mount: Deactivated successfully. Jan 17 12:16:49.276024 containerd[1782]: time="2025-01-17T12:16:49.275959082Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:49.280344 containerd[1782]: time="2025-01-17T12:16:49.280262187Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jan 17 12:16:49.283483 containerd[1782]: time="2025-01-17T12:16:49.283417664Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:49.287584 containerd[1782]: time="2025-01-17T12:16:49.287530564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:16:49.288611 containerd[1782]: time="2025-01-17T12:16:49.288578790Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.877223966s" Jan 17 12:16:49.288865 containerd[1782]: time="2025-01-17T12:16:49.288716593Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jan 17 12:16:51.890782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 17 12:16:51.897055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:51.929517 systemd[1]: Reloading requested from client PID 2934 ('systemctl') (unit session-9.scope)... Jan 17 12:16:51.929534 systemd[1]: Reloading... Jan 17 12:16:52.010789 zram_generator::config[2970]: No configuration found. Jan 17 12:16:52.164747 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:16:52.251933 systemd[1]: Reloading finished in 321 ms. Jan 17 12:16:52.299950 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:52.306277 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:52.308981 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:16:52.309325 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:52.315286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:16:52.558958 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:16:52.573181 (kubelet)[3059]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:16:52.620892 kubelet[3059]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:16:52.620892 kubelet[3059]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:16:52.620892 kubelet[3059]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:16:52.621433 kubelet[3059]: I0117 12:16:52.620954 3059 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:16:52.775784 kubelet[3059]: I0117 12:16:52.775733 3059 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:16:52.775784 kubelet[3059]: I0117 12:16:52.775776 3059 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:16:52.776077 kubelet[3059]: I0117 12:16:52.776055 3059 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:16:53.164885 kubelet[3059]: E0117 12:16:53.164836 3059 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.166476 kubelet[3059]: I0117 12:16:53.166315 3059 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:16:53.176693 kubelet[3059]: I0117 12:16:53.176600 3059 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 17 12:16:53.178445 kubelet[3059]: I0117 12:16:53.178412 3059 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:16:53.178685 kubelet[3059]: I0117 12:16:53.178657 3059 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:16:53.179324 kubelet[3059]: I0117 12:16:53.179294 3059 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:16:53.179324 kubelet[3059]: I0117 12:16:53.179326 3059 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:16:53.179488 kubelet[3059]: I0117 12:16:53.179467 3059 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:16:53.179606 kubelet[3059]: I0117 12:16:53.179594 3059 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:16:53.179664 kubelet[3059]: I0117 12:16:53.179615 3059 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:16:53.179664 kubelet[3059]: I0117 12:16:53.179655 3059 kubelet.go:312] "Adding apiserver pod source" Jan 17 12:16:53.179744 kubelet[3059]: I0117 12:16:53.179676 3059 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:16:53.181904 kubelet[3059]: W0117 12:16:53.181696 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.181904 kubelet[3059]: E0117 12:16:53.181754 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.183053 kubelet[3059]: W0117 12:16:53.182925 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-bcafed7e46&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 
12:16:53.183053 kubelet[3059]: E0117 12:16:53.182983 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-bcafed7e46&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.183312 kubelet[3059]: I0117 12:16:53.183298 3059 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:16:53.187121 kubelet[3059]: I0117 12:16:53.187100 3059 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:16:53.188250 kubelet[3059]: W0117 12:16:53.188224 3059 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 17 12:16:53.188882 kubelet[3059]: I0117 12:16:53.188856 3059 server.go:1256] "Started kubelet" Jan 17 12:16:53.190495 kubelet[3059]: I0117 12:16:53.190467 3059 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:16:53.198779 kubelet[3059]: I0117 12:16:53.198475 3059 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:16:53.199819 kubelet[3059]: I0117 12:16:53.199796 3059 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:16:53.201601 kubelet[3059]: I0117 12:16:53.201427 3059 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:16:53.201713 kubelet[3059]: I0117 12:16:53.201693 3059 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:16:53.204163 kubelet[3059]: I0117 12:16:53.203944 3059 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:16:53.205849 kubelet[3059]: E0117 12:16:53.205818 3059 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-bcafed7e46.181b79fe04b123c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-bcafed7e46,UID:ci-4081.3.0-a-bcafed7e46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-bcafed7e46,},FirstTimestamp:2025-01-17 12:16:53.188830148 +0000 UTC m=+0.611700647,LastTimestamp:2025-01-17 12:16:53.188830148 +0000 UTC m=+0.611700647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-bcafed7e46,}" Jan 17 12:16:53.206027 kubelet[3059]: E0117 12:16:53.206005 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-bcafed7e46?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="200ms" Jan 17 12:16:53.206825 kubelet[3059]: I0117 12:16:53.206273 3059 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:16:53.210097 kubelet[3059]: W0117 12:16:53.210028 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 
10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.210188 kubelet[3059]: E0117 12:16:53.210116 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.210390 kubelet[3059]: I0117 12:16:53.210367 3059 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:16:53.210515 kubelet[3059]: I0117 12:16:53.210490 3059 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:16:53.211111 kubelet[3059]: I0117 12:16:53.211093 3059 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:16:53.214128 kubelet[3059]: I0117 12:16:53.214109 3059 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:16:53.232836 kubelet[3059]: E0117 12:16:53.232796 3059 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:16:53.257231 kubelet[3059]: I0117 12:16:53.257004 3059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:16:53.259197 kubelet[3059]: I0117 12:16:53.259141 3059 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:16:53.259397 kubelet[3059]: I0117 12:16:53.259276 3059 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:16:53.259397 kubelet[3059]: I0117 12:16:53.259305 3059 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:16:53.259910 kubelet[3059]: E0117 12:16:53.259706 3059 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:16:53.261836 kubelet[3059]: W0117 12:16:53.261745 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.262031 kubelet[3059]: E0117 12:16:53.261856 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:53.265516 kubelet[3059]: I0117 12:16:53.265480 3059 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:16:53.265584 kubelet[3059]: I0117 12:16:53.265512 3059 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:16:53.265584 kubelet[3059]: I0117 12:16:53.265542 3059 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:16:53.272641 kubelet[3059]: I0117 12:16:53.272613 3059 policy_none.go:49] "None policy: Start" Jan 17 12:16:53.273336 kubelet[3059]: I0117 12:16:53.273314 3059 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:16:53.273424 kubelet[3059]: I0117 12:16:53.273344 3059 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:16:53.282397 kubelet[3059]: I0117 12:16:53.282356 3059 manager.go:479] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:16:53.282670 kubelet[3059]: I0117 12:16:53.282649 3059 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:16:53.290976 kubelet[3059]: E0117 12:16:53.290899 3059 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-bcafed7e46\" not found" Jan 17 12:16:53.306469 kubelet[3059]: I0117 12:16:53.306435 3059 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.307007 kubelet[3059]: E0117 12:16:53.306985 3059 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.360668 kubelet[3059]: I0117 12:16:53.360516 3059 topology_manager.go:215] "Topology Admit Handler" podUID="b02484b66f405a73923d36d6700a0f8d" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.362895 kubelet[3059]: I0117 12:16:53.362860 3059 topology_manager.go:215] "Topology Admit Handler" podUID="420adc04093fb1de560c7a2500c24130" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.364993 kubelet[3059]: I0117 12:16:53.364791 3059 topology_manager.go:215] "Topology Admit Handler" podUID="2e48ad688f285e008daad7c299dab4ee" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.407383 kubelet[3059]: E0117 12:16:53.407336 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-bcafed7e46?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="400ms" Jan 17 12:16:53.412756 kubelet[3059]: I0117 12:16:53.412680 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.412756 kubelet[3059]: I0117 12:16:53.412740 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.412756 kubelet[3059]: I0117 12:16:53.412786 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413037 kubelet[3059]: I0117 12:16:53.412837 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") 
" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413037 kubelet[3059]: I0117 12:16:53.412896 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413037 kubelet[3059]: I0117 12:16:53.412932 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e48ad688f285e008daad7c299dab4ee-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-bcafed7e46\" (UID: \"2e48ad688f285e008daad7c299dab4ee\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413037 kubelet[3059]: I0117 12:16:53.412960 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413037 kubelet[3059]: I0117 12:16:53.412999 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.413213 kubelet[3059]: I0117 12:16:53.413037 3059 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.510241 kubelet[3059]: I0117 12:16:53.510089 3059 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.511244 kubelet[3059]: E0117 12:16:53.511216 3059 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.669470 containerd[1782]: time="2025-01-17T12:16:53.669414171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-bcafed7e46,Uid:b02484b66f405a73923d36d6700a0f8d,Namespace:kube-system,Attempt:0,}" Jan 17 12:16:53.671150 containerd[1782]: time="2025-01-17T12:16:53.671104408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-bcafed7e46,Uid:420adc04093fb1de560c7a2500c24130,Namespace:kube-system,Attempt:0,}" Jan 17 12:16:53.674800 containerd[1782]: time="2025-01-17T12:16:53.674755887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-bcafed7e46,Uid:2e48ad688f285e008daad7c299dab4ee,Namespace:kube-system,Attempt:0,}" Jan 17 12:16:53.808937 kubelet[3059]: E0117 12:16:53.808813 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-bcafed7e46?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="800ms" Jan 17 12:16:53.913101 kubelet[3059]: I0117 12:16:53.913039 3059 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:53.913453 kubelet[3059]: E0117 12:16:53.913429 3059 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:54.025692 kubelet[3059]: W0117 12:16:54.025625 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-bcafed7e46&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.025856 kubelet[3059]: E0117 12:16:54.025723 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-bcafed7e46&limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.082535 kubelet[3059]: W0117 12:16:54.081151 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.082535 kubelet[3059]: E0117 12:16:54.081201 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.211363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount488282725.mount: Deactivated successfully. 
Jan 17 12:16:54.253532 containerd[1782]: time="2025-01-17T12:16:54.253462652Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:16:54.256853 containerd[1782]: time="2025-01-17T12:16:54.256747623Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 17 12:16:54.259373 containerd[1782]: time="2025-01-17T12:16:54.259318779Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:16:54.264275 containerd[1782]: time="2025-01-17T12:16:54.264229986Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:16:54.265682 kubelet[3059]: E0117 12:16:54.265652 3059 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.43:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.43:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-bcafed7e46.181b79fe04b123c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-bcafed7e46,UID:ci-4081.3.0-a-bcafed7e46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-bcafed7e46,},FirstTimestamp:2025-01-17 12:16:53.188830148 +0000 UTC m=+0.611700647,LastTimestamp:2025-01-17 12:16:53.188830148 +0000 UTC m=+0.611700647,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-bcafed7e46,}" Jan 17 12:16:54.266569 containerd[1782]: time="2025-01-17T12:16:54.266518035Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:16:54.271348 containerd[1782]: time="2025-01-17T12:16:54.271300239Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:16:54.274219 containerd[1782]: time="2025-01-17T12:16:54.274119500Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 17 12:16:54.280171 containerd[1782]: time="2025-01-17T12:16:54.280112931Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 17 12:16:54.281198 containerd[1782]: time="2025-01-17T12:16:54.280863547Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 606.023559ms" Jan 17 12:16:54.282346 containerd[1782]: time="2025-01-17T12:16:54.282308778Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id 
\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 612.791305ms" Jan 17 12:16:54.286270 containerd[1782]: time="2025-01-17T12:16:54.286234264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 615.044654ms" Jan 17 12:16:54.338868 kubelet[3059]: W0117 12:16:54.338724 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.338868 kubelet[3059]: E0117 12:16:54.338802 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.609856 kubelet[3059]: E0117 12:16:54.609710 3059 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-bcafed7e46?timeout=10s\": dial tcp 10.200.8.43:6443: connect: connection refused" interval="1.6s" Jan 17 12:16:54.646159 kubelet[3059]: W0117 12:16:54.646036 3059 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.646159 kubelet[3059]: E0117 12:16:54.646128 3059 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.43:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:54.717057 kubelet[3059]: I0117 12:16:54.716269 3059 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:54.717057 kubelet[3059]: E0117 12:16:54.716742 3059 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.43:6443/api/v1/nodes\": dial tcp 10.200.8.43:6443: connect: connection refused" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:54.963594 containerd[1782]: time="2025-01-17T12:16:54.963459968Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:54.965307 containerd[1782]: time="2025-01-17T12:16:54.964828798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:54.965307 containerd[1782]: time="2025-01-17T12:16:54.964964600Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:54.965307 containerd[1782]: time="2025-01-17T12:16:54.965174305Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:54.966639 containerd[1782]: time="2025-01-17T12:16:54.965984823Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:54.966639 containerd[1782]: time="2025-01-17T12:16:54.966045824Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:54.966639 containerd[1782]: time="2025-01-17T12:16:54.966097325Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:54.966639 containerd[1782]: time="2025-01-17T12:16:54.966239328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:54.967086 containerd[1782]: time="2025-01-17T12:16:54.966691738Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:16:54.967086 containerd[1782]: time="2025-01-17T12:16:54.966782840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:16:54.967086 containerd[1782]: time="2025-01-17T12:16:54.966806240Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:54.967086 containerd[1782]: time="2025-01-17T12:16:54.966907543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:16:55.082802 containerd[1782]: time="2025-01-17T12:16:55.082213846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-bcafed7e46,Uid:2e48ad688f285e008daad7c299dab4ee,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bc03fc671247a9bc1cbb4d8881a5eab0a4e8d2566a6cd10c6a566e0ba13b332\"" Jan 17 12:16:55.092417 containerd[1782]: time="2025-01-17T12:16:55.091876256Z" level=info msg="CreateContainer within sandbox \"6bc03fc671247a9bc1cbb4d8881a5eab0a4e8d2566a6cd10c6a566e0ba13b332\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 17 12:16:55.095395 containerd[1782]: time="2025-01-17T12:16:55.095336631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-bcafed7e46,Uid:420adc04093fb1de560c7a2500c24130,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe1f44496323ea233ea0b7df04d0b95ec6228fcee8be4ff7f1c3e12cdab379e4\"" Jan 17 12:16:55.100087 containerd[1782]: time="2025-01-17T12:16:55.100046333Z" level=info msg="CreateContainer within sandbox \"fe1f44496323ea233ea0b7df04d0b95ec6228fcee8be4ff7f1c3e12cdab379e4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 17 12:16:55.107901 containerd[1782]: time="2025-01-17T12:16:55.107858603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-bcafed7e46,Uid:b02484b66f405a73923d36d6700a0f8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f684c3cf23f12082418a207bd7876758b7166b68cc6514efa8044c6f757589d7\"" Jan 17 12:16:55.111040 containerd[1782]: time="2025-01-17T12:16:55.111003671Z" level=info msg="CreateContainer within sandbox \"f684c3cf23f12082418a207bd7876758b7166b68cc6514efa8044c6f757589d7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 17 
12:16:55.162387 containerd[1782]: time="2025-01-17T12:16:55.162326586Z" level=info msg="CreateContainer within sandbox \"6bc03fc671247a9bc1cbb4d8881a5eab0a4e8d2566a6cd10c6a566e0ba13b332\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eccd136b88558f73648fff53c540ede4ad94bc9af9c84f0d4f5c761322aa41ca\"" Jan 17 12:16:55.163177 containerd[1782]: time="2025-01-17T12:16:55.163144703Z" level=info msg="StartContainer for \"eccd136b88558f73648fff53c540ede4ad94bc9af9c84f0d4f5c761322aa41ca\"" Jan 17 12:16:55.195627 containerd[1782]: time="2025-01-17T12:16:55.194756990Z" level=info msg="CreateContainer within sandbox \"fe1f44496323ea233ea0b7df04d0b95ec6228fcee8be4ff7f1c3e12cdab379e4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"2dba6df5764669e29c266a66f696c064ee22754df4d811f078f1faa975d4c1ac\"" Jan 17 12:16:55.196812 containerd[1782]: time="2025-01-17T12:16:55.196452827Z" level=info msg="StartContainer for \"2dba6df5764669e29c266a66f696c064ee22754df4d811f078f1faa975d4c1ac\"" Jan 17 12:16:55.210998 containerd[1782]: time="2025-01-17T12:16:55.209953620Z" level=info msg="CreateContainer within sandbox \"f684c3cf23f12082418a207bd7876758b7166b68cc6514efa8044c6f757589d7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"186116758ddf904a67924b91bc7fa20f100ca9ccf7e6422bfaf67d3735206467\"" Jan 17 12:16:55.212213 containerd[1782]: time="2025-01-17T12:16:55.211960463Z" level=info msg="StartContainer for \"186116758ddf904a67924b91bc7fa20f100ca9ccf7e6422bfaf67d3735206467\"" Jan 17 12:16:55.259983 systemd[1]: run-containerd-runc-k8s.io-186116758ddf904a67924b91bc7fa20f100ca9ccf7e6422bfaf67d3735206467-runc.ETGFK6.mount: Deactivated successfully. Jan 17 12:16:55.315978 containerd[1782]: time="2025-01-17T12:16:55.314574091Z" level=info msg="StartContainer for \"eccd136b88558f73648fff53c540ede4ad94bc9af9c84f0d4f5c761322aa41ca\" returns successfully" Jan 17 12:16:55.349784 kubelet[3059]: E0117 12:16:55.349727 3059 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.43:6443: connect: connection refused Jan 17 12:16:55.360803 containerd[1782]: time="2025-01-17T12:16:55.360451487Z" level=info msg="StartContainer for \"186116758ddf904a67924b91bc7fa20f100ca9ccf7e6422bfaf67d3735206467\" returns successfully" Jan 17 12:16:55.412798 containerd[1782]: time="2025-01-17T12:16:55.411676800Z" level=info msg="StartContainer for \"2dba6df5764669e29c266a66f696c064ee22754df4d811f078f1faa975d4c1ac\" returns successfully" Jan 17 12:16:56.320319 kubelet[3059]: I0117 12:16:56.320171 3059 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:57.497983 kubelet[3059]: E0117 12:16:57.497914 3059 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-bcafed7e46\" not found" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:57.562310 kubelet[3059]: I0117 12:16:57.562088 3059 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:58.182140 kubelet[3059]: I0117 12:16:58.182083 3059 apiserver.go:52] "Watching apiserver" Jan 17 12:16:58.207556 kubelet[3059]: I0117 12:16:58.207509 3059 desired_state_of_world_populator.go:159] "Finished populating initial desired state of 
world" Jan 17 12:16:58.317582 kubelet[3059]: E0117 12:16:58.317463 3059 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-bcafed7e46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:58.317582 kubelet[3059]: E0117 12:16:58.317485 3059 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:16:59.029488 kubelet[3059]: W0117 12:16:59.029439 3059 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:00.292289 systemd[1]: Reloading requested from client PID 3332 ('systemctl') (unit session-9.scope)... Jan 17 12:17:00.292306 systemd[1]: Reloading... Jan 17 12:17:00.392797 zram_generator::config[3372]: No configuration found. Jan 17 12:17:00.523861 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 12:17:00.606196 systemd[1]: Reloading finished in 313 ms. Jan 17 12:17:00.642749 kubelet[3059]: I0117 12:17:00.642660 3059 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:00.643032 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:00.657242 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 12:17:00.657740 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:00.665380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 12:17:00.772238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 12:17:00.784232 (kubelet)[3449]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 12:17:00.836548 kubelet[3449]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 12:17:00.836548 kubelet[3449]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 17 12:17:00.836548 kubelet[3449]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 17 12:17:00.837095 kubelet[3449]: I0117 12:17:00.836598 3449 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 12:17:00.844129 kubelet[3449]: I0117 12:17:00.844095 3449 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jan 17 12:17:00.844129 kubelet[3449]: I0117 12:17:00.844122 3449 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 12:17:00.844590 kubelet[3449]: I0117 12:17:00.844325 3449 server.go:919] "Client rotation is on, will bootstrap in background" Jan 17 12:17:00.846887 kubelet[3449]: I0117 12:17:00.846848 3449 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 12:17:00.849385 kubelet[3449]: I0117 12:17:00.848962 3449 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 12:17:00.856991 kubelet[3449]: I0117 12:17:00.856817 3449 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 12:17:00.858241 kubelet[3449]: I0117 12:17:00.858130 3449 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 12:17:00.858531 kubelet[3449]: I0117 12:17:00.858453 3449 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 17 12:17:00.858531 kubelet[3449]: I0117 12:17:00.858492 3449 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 12:17:00.858531 kubelet[3449]: I0117 12:17:00.858506 3449 container_manager_linux.go:301] "Creating device plugin manager" Jan 17 12:17:00.858754 kubelet[3449]: I0117 12:17:00.858559 3449 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:00.858754 kubelet[3449]: I0117 12:17:00.858674 3449 kubelet.go:396] "Attempting to sync node with API server" Jan 17 12:17:00.858754 kubelet[3449]: I0117 12:17:00.858691 3449 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 12:17:00.858754 kubelet[3449]: I0117 12:17:00.858725 3449 kubelet.go:312] "Adding apiserver pod source" Jan 17 
12:17:00.858754 kubelet[3449]: I0117 12:17:00.858745 3449 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 12:17:00.868403 kubelet[3449]: I0117 12:17:00.863223 3449 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 12:17:00.868403 kubelet[3449]: I0117 12:17:00.864129 3449 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 12:17:00.868403 kubelet[3449]: I0117 12:17:00.864608 3449 server.go:1256] "Started kubelet" Jan 17 12:17:00.875957 kubelet[3449]: I0117 12:17:00.875793 3449 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 12:17:00.884698 kubelet[3449]: I0117 12:17:00.884675 3449 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 12:17:00.886567 kubelet[3449]: I0117 12:17:00.886550 3449 server.go:461] "Adding debug handlers to kubelet server" Jan 17 12:17:00.890937 kubelet[3449]: I0117 12:17:00.890903 3449 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 12:17:00.891309 kubelet[3449]: I0117 12:17:00.891295 3449 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 12:17:00.892975 kubelet[3449]: I0117 12:17:00.892956 3449 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 17 12:17:00.898798 kubelet[3449]: I0117 12:17:00.893133 3449 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jan 17 12:17:00.898931 kubelet[3449]: I0117 12:17:00.898916 3449 reconciler_new.go:29] "Reconciler: start to sync state" Jan 17 12:17:00.899801 kubelet[3449]: I0117 12:17:00.899783 3449 factory.go:221] Registration of the systemd container factory successfully Jan 17 12:17:00.900085 kubelet[3449]: I0117 12:17:00.900063 3449 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 12:17:00.908123 kubelet[3449]: I0117 12:17:00.908044 3449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 12:17:00.910156 kubelet[3449]: I0117 12:17:00.909829 3449 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 12:17:00.910156 kubelet[3449]: I0117 12:17:00.909859 3449 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 17 12:17:00.910156 kubelet[3449]: I0117 12:17:00.909879 3449 kubelet.go:2329] "Starting kubelet main sync loop" Jan 17 12:17:00.910156 kubelet[3449]: E0117 12:17:00.909946 3449 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 12:17:00.914312 kubelet[3449]: I0117 12:17:00.914288 3449 factory.go:221] Registration of the containerd container factory successfully Jan 17 12:17:00.920531 kubelet[3449]: E0117 12:17:00.920421 3449 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 12:17:00.981817 kubelet[3449]: I0117 12:17:00.981784 3449 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 17 12:17:00.981817 kubelet[3449]: I0117 12:17:00.981808 3449 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 17 12:17:00.981817 kubelet[3449]: I0117 12:17:00.981829 3449 state_mem.go:36] "Initialized new in-memory state store" Jan 17 12:17:00.982061 kubelet[3449]: I0117 12:17:00.982010 3449 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 12:17:00.982061 kubelet[3449]: I0117 12:17:00.982039 3449 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 12:17:00.982061 kubelet[3449]: I0117 12:17:00.982050 3449 policy_none.go:49] "None policy: Start" Jan 17 12:17:00.982903 kubelet[3449]: I0117 12:17:00.982877 3449 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 17 12:17:00.983030 kubelet[3449]: I0117 12:17:00.982911 3449 state_mem.go:35] "Initializing new in-memory state store" Jan 17 12:17:00.983157 kubelet[3449]: I0117 12:17:00.983137 3449 state_mem.go:75] "Updated machine memory state" Jan 17 12:17:00.987070 kubelet[3449]: I0117 12:17:00.984382 3449 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 12:17:00.987070 kubelet[3449]: I0117 12:17:00.984672 3449 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 12:17:00.997571 kubelet[3449]: I0117 12:17:00.997539 3449 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.010185 kubelet[3449]: I0117 12:17:01.010153 3449 topology_manager.go:215] "Topology Admit Handler" podUID="b02484b66f405a73923d36d6700a0f8d" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.010564 kubelet[3449]: I0117 12:17:01.010544 3449 topology_manager.go:215] "Topology Admit Handler" podUID="420adc04093fb1de560c7a2500c24130" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.010733 kubelet[3449]: I0117 12:17:01.010713 3449 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.010842 kubelet[3449]: I0117 12:17:01.010830 3449 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.011552 kubelet[3449]: I0117 12:17:01.010718 3449 topology_manager.go:215] "Topology Admit Handler" podUID="2e48ad688f285e008daad7c299dab4ee" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.025086 kubelet[3449]: W0117 12:17:01.025056 3449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:01.025757 kubelet[3449]: W0117 12:17:01.025736 3449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:01.025994 kubelet[3449]: W0117 12:17:01.025775 3449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:01.026170 kubelet[3449]: E0117 12:17:01.026156 3449 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" already exists" 
pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200527 kubelet[3449]: I0117 12:17:01.200410 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200527 kubelet[3449]: I0117 12:17:01.200482 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200835 kubelet[3449]: I0117 12:17:01.200577 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200835 kubelet[3449]: I0117 12:17:01.200648 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200835 kubelet[3449]: I0117 12:17:01.200685 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200835 kubelet[3449]: I0117 12:17:01.200743 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.200835 kubelet[3449]: I0117 12:17:01.200802 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2e48ad688f285e008daad7c299dab4ee-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-bcafed7e46\" (UID: \"2e48ad688f285e008daad7c299dab4ee\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.201157 kubelet[3449]: I0117 12:17:01.200838 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b02484b66f405a73923d36d6700a0f8d-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" (UID: \"b02484b66f405a73923d36d6700a0f8d\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.201157 kubelet[3449]: I0117 
12:17:01.200888 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/420adc04093fb1de560c7a2500c24130-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" (UID: \"420adc04093fb1de560c7a2500c24130\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.861777 kubelet[3449]: I0117 12:17:01.861721 3449 apiserver.go:52] "Watching apiserver" Jan 17 12:17:01.899162 kubelet[3449]: I0117 12:17:01.899048 3449 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jan 17 12:17:01.989979 kubelet[3449]: W0117 12:17:01.989937 3449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:01.990175 kubelet[3449]: E0117 12:17:01.990039 3449 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-bcafed7e46\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:01.992796 kubelet[3449]: W0117 12:17:01.990720 3449 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 17 12:17:01.992796 kubelet[3449]: E0117 12:17:01.990812 3449 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.3.0-a-bcafed7e46\" already exists" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:02.028562 kubelet[3449]: I0117 12:17:02.028525 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-bcafed7e46" podStartSLOduration=1.02847077 podStartE2EDuration="1.02847077s" podCreationTimestamp="2025-01-17 12:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:02.028070861 +0000 UTC m=+1.238672699" watchObservedRunningTime="2025-01-17 12:17:02.02847077 +0000 UTC m=+1.239072608" Jan 17 12:17:02.058518 kubelet[3449]: I0117 12:17:02.058415 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-bcafed7e46" podStartSLOduration=1.058366232 podStartE2EDuration="1.058366232s" podCreationTimestamp="2025-01-17 12:17:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:02.058094026 +0000 UTC m=+1.268695964" watchObservedRunningTime="2025-01-17 12:17:02.058366232 +0000 UTC m=+1.268968070" Jan 17 12:17:02.118916 kubelet[3449]: I0117 12:17:02.117198 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-bcafed7e46" podStartSLOduration=3.116298615 podStartE2EDuration="3.116298615s" podCreationTimestamp="2025-01-17 12:16:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:02.115908706 +0000 UTC m=+1.326510544" watchObservedRunningTime="2025-01-17 12:17:02.116298615 +0000 UTC m=+1.326900453" Jan 17 12:17:06.017993 sudo[2485]: pam_unix(sudo:session): session closed for user root Jan 17 12:17:06.124620 sshd[2481]: pam_unix(sshd:session): session closed for user core Jan 17 12:17:06.128104 systemd[1]: 
sshd@6-10.200.8.43:22-10.200.16.10:49036.service: Deactivated successfully. Jan 17 12:17:06.132672 systemd-logind[1758]: Session 9 logged out. Waiting for processes to exit. Jan 17 12:17:06.133906 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 12:17:06.136552 systemd-logind[1758]: Removed session 9. Jan 17 12:17:15.350331 kubelet[3449]: I0117 12:17:15.350298 3449 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 12:17:15.350960 containerd[1782]: time="2025-01-17T12:17:15.350738261Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 12:17:15.351341 kubelet[3449]: I0117 12:17:15.350979 3449 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 12:17:15.944670 kubelet[3449]: I0117 12:17:15.943433 3449 topology_manager.go:215] "Topology Admit Handler" podUID="cefadbe9-edbd-48fe-874e-3836f5609d77" podNamespace="kube-system" podName="kube-proxy-rz58p" Jan 17 12:17:15.995576 kubelet[3449]: I0117 12:17:15.995510 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xk72v\" (UniqueName: \"kubernetes.io/projected/cefadbe9-edbd-48fe-874e-3836f5609d77-kube-api-access-xk72v\") pod \"kube-proxy-rz58p\" (UID: \"cefadbe9-edbd-48fe-874e-3836f5609d77\") " pod="kube-system/kube-proxy-rz58p" Jan 17 12:17:15.995576 kubelet[3449]: I0117 12:17:15.995563 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cefadbe9-edbd-48fe-874e-3836f5609d77-kube-proxy\") pod \"kube-proxy-rz58p\" (UID: \"cefadbe9-edbd-48fe-874e-3836f5609d77\") " pod="kube-system/kube-proxy-rz58p" Jan 17 12:17:15.995850 kubelet[3449]: I0117 12:17:15.995614 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cefadbe9-edbd-48fe-874e-3836f5609d77-lib-modules\") pod \"kube-proxy-rz58p\" (UID: \"cefadbe9-edbd-48fe-874e-3836f5609d77\") " pod="kube-system/kube-proxy-rz58p" Jan 17 12:17:15.995850 kubelet[3449]: I0117 12:17:15.995645 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cefadbe9-edbd-48fe-874e-3836f5609d77-xtables-lock\") pod \"kube-proxy-rz58p\" (UID: \"cefadbe9-edbd-48fe-874e-3836f5609d77\") " pod="kube-system/kube-proxy-rz58p" Jan 17 12:17:16.105701 kubelet[3449]: E0117 12:17:16.105622 3449 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 17 12:17:16.105701 kubelet[3449]: E0117 12:17:16.105668 3449 projected.go:200] Error preparing data for projected volume kube-api-access-xk72v for pod kube-system/kube-proxy-rz58p: configmap "kube-root-ca.crt" not found Jan 17 12:17:16.105951 kubelet[3449]: E0117 12:17:16.105771 3449 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/cefadbe9-edbd-48fe-874e-3836f5609d77-kube-api-access-xk72v podName:cefadbe9-edbd-48fe-874e-3836f5609d77 nodeName:}" failed. No retries permitted until 2025-01-17 12:17:16.605734318 +0000 UTC m=+15.816336256 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xk72v" (UniqueName: "kubernetes.io/projected/cefadbe9-edbd-48fe-874e-3836f5609d77-kube-api-access-xk72v") pod "kube-proxy-rz58p" (UID: "cefadbe9-edbd-48fe-874e-3836f5609d77") : configmap "kube-root-ca.crt" not found Jan 17 12:17:16.423122 kubelet[3449]: I0117 12:17:16.421685 3449 topology_manager.go:215] "Topology Admit Handler" podUID="10b33d47-08c3-468d-abbb-7f0a101b0453" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-kdbxj" Jan 17 12:17:16.500181 kubelet[3449]: I0117 12:17:16.500106 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/10b33d47-08c3-468d-abbb-7f0a101b0453-var-lib-calico\") pod \"tigera-operator-c7ccbd65-kdbxj\" (UID: \"10b33d47-08c3-468d-abbb-7f0a101b0453\") " pod="tigera-operator/tigera-operator-c7ccbd65-kdbxj" Jan 17 12:17:16.500181 kubelet[3449]: I0117 12:17:16.500170 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gbm8\" (UniqueName: \"kubernetes.io/projected/10b33d47-08c3-468d-abbb-7f0a101b0453-kube-api-access-2gbm8\") pod \"tigera-operator-c7ccbd65-kdbxj\" (UID: \"10b33d47-08c3-468d-abbb-7f0a101b0453\") " pod="tigera-operator/tigera-operator-c7ccbd65-kdbxj" Jan 17 12:17:16.729485 containerd[1782]: time="2025-01-17T12:17:16.729361803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kdbxj,Uid:10b33d47-08c3-468d-abbb-7f0a101b0453,Namespace:tigera-operator,Attempt:0,}" Jan 17 12:17:16.850087 containerd[1782]: time="2025-01-17T12:17:16.850018625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rz58p,Uid:cefadbe9-edbd-48fe-874e-3836f5609d77,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:17.845101 containerd[1782]: time="2025-01-17T12:17:17.844537935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:17.845601 containerd[1782]: time="2025-01-17T12:17:17.845410154Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:17.845601 containerd[1782]: time="2025-01-17T12:17:17.845477155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:17.845706 containerd[1782]: time="2025-01-17T12:17:17.845592458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:17.849874 containerd[1782]: time="2025-01-17T12:17:17.846686382Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:17.849874 containerd[1782]: time="2025-01-17T12:17:17.848707727Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:17.849874 containerd[1782]: time="2025-01-17T12:17:17.848754928Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:17.849874 containerd[1782]: time="2025-01-17T12:17:17.848892931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:17.907790 containerd[1782]: time="2025-01-17T12:17:17.907394623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rz58p,Uid:cefadbe9-edbd-48fe-874e-3836f5609d77,Namespace:kube-system,Attempt:0,} returns sandbox id \"9af04bc4453fdb5323832aeabf81c70eb1ca2a3dbf3f42f4a41c58f1007eec96\"" Jan 17 12:17:17.913399 containerd[1782]: time="2025-01-17T12:17:17.913355255Z" level=info msg="CreateContainer within sandbox \"9af04bc4453fdb5323832aeabf81c70eb1ca2a3dbf3f42f4a41c58f1007eec96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 12:17:17.932292 containerd[1782]: time="2025-01-17T12:17:17.932251072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-kdbxj,Uid:10b33d47-08c3-468d-abbb-7f0a101b0453,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"5b0354742ad179caa5e7722a7b4cfef0e0a2ff7c17de63745589708f37443cfc\"" Jan 17 12:17:17.933648 containerd[1782]: time="2025-01-17T12:17:17.933606202Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 17 12:17:17.960940 containerd[1782]: time="2025-01-17T12:17:17.960851504Z" level=info msg="CreateContainer within sandbox \"9af04bc4453fdb5323832aeabf81c70eb1ca2a3dbf3f42f4a41c58f1007eec96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"80a801d1be86677fbb2950c51aca7babed698423c6261a2fa2c1936c4c687728\"" Jan 17 12:17:17.961899 containerd[1782]: time="2025-01-17T12:17:17.961622621Z" level=info msg="StartContainer for \"80a801d1be86677fbb2950c51aca7babed698423c6261a2fa2c1936c4c687728\"" Jan 17 12:17:18.025007 containerd[1782]: time="2025-01-17T12:17:18.024962120Z" level=info msg="StartContainer for \"80a801d1be86677fbb2950c51aca7babed698423c6261a2fa2c1936c4c687728\" returns successfully" Jan 17 12:17:18.815758 systemd[1]: run-containerd-runc-k8s.io-5b0354742ad179caa5e7722a7b4cfef0e0a2ff7c17de63745589708f37443cfc-runc.CbZOyL.mount: Deactivated successfully. Jan 17 12:17:19.003151 kubelet[3449]: I0117 12:17:19.003096 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rz58p" podStartSLOduration=4.003042725 podStartE2EDuration="4.003042725s" podCreationTimestamp="2025-01-17 12:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:19.002693517 +0000 UTC m=+18.213295355" watchObservedRunningTime="2025-01-17 12:17:19.003042725 +0000 UTC m=+18.213644563" Jan 17 12:17:24.804671 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66826940.mount: Deactivated successfully. 
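[Editor's note] The `pod_startup_latency_tracker` entry above reports podStartE2EDuration=4.003042725s for kube-proxy-rz58p, computed from the pod's creation timestamp to its observed running time. The sketch below recomputes that span from the two timestamps printed in the log; the layout string is Go's default time.Time formatting, which is what the tracker emits, and the result only approximates the reported figure (the tracker uses its own monotonic offsets, visible as the m=+… suffixes).

```go
// Illustrative only: recomputing a pod startup duration from the two
// timestamps reported by pod_startup_latency_tracker above. Not the
// tracker's actual code; the result approximates the reported ~4.003s.
package main

import (
	"fmt"
	"time"
)

func main() {
	// Go's default time.Time formatting, as it appears in the log.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"

	created, err := time.Parse(layout, "2025-01-17 12:17:15 +0000 UTC")
	if err != nil {
		panic(err)
	}
	running, err := time.Parse(layout, "2025-01-17 12:17:19.002693517 +0000 UTC")
	if err != nil {
		panic(err)
	}

	fmt.Println("kube-proxy-rz58p startup:", running.Sub(created)) // ~4.0027s
}
```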
Jan 17 12:17:25.479077 containerd[1782]: time="2025-01-17T12:17:25.478975173Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.481780 containerd[1782]: time="2025-01-17T12:17:25.481686632Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764281" Jan 17 12:17:25.485058 containerd[1782]: time="2025-01-17T12:17:25.484922104Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.490269 containerd[1782]: time="2025-01-17T12:17:25.490216621Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:25.491006 containerd[1782]: time="2025-01-17T12:17:25.490967237Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 7.557314934s" Jan 17 12:17:25.491095 containerd[1782]: time="2025-01-17T12:17:25.491009038Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 17 12:17:25.493671 containerd[1782]: time="2025-01-17T12:17:25.493638096Z" level=info msg="CreateContainer within sandbox \"5b0354742ad179caa5e7722a7b4cfef0e0a2ff7c17de63745589708f37443cfc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 12:17:25.526473 containerd[1782]: time="2025-01-17T12:17:25.526432721Z" level=info msg="CreateContainer within sandbox \"5b0354742ad179caa5e7722a7b4cfef0e0a2ff7c17de63745589708f37443cfc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1e8dfb53491b7961c487383857761fc779d7386d6783862a6ccca0bec20015e8\"" Jan 17 12:17:25.528486 containerd[1782]: time="2025-01-17T12:17:25.527093135Z" level=info msg="StartContainer for \"1e8dfb53491b7961c487383857761fc779d7386d6783862a6ccca0bec20015e8\"" Jan 17 12:17:25.584269 containerd[1782]: time="2025-01-17T12:17:25.583989392Z" level=info msg="StartContainer for \"1e8dfb53491b7961c487383857761fc779d7386d6783862a6ccca0bec20015e8\" returns successfully" Jan 17 12:17:28.617941 kubelet[3449]: I0117 12:17:28.617880 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-kdbxj" podStartSLOduration=5.059754735 podStartE2EDuration="12.617817486s" podCreationTimestamp="2025-01-17 12:17:16 +0000 UTC" firstStartedPulling="2025-01-17 12:17:17.933193393 +0000 UTC m=+17.143795231" lastFinishedPulling="2025-01-17 12:17:25.491256144 +0000 UTC m=+24.701857982" observedRunningTime="2025-01-17 12:17:26.01344729 +0000 UTC m=+25.224049228" watchObservedRunningTime="2025-01-17 12:17:28.617817486 +0000 UTC m=+27.828419324" Jan 17 12:17:28.618561 kubelet[3449]: I0117 12:17:28.618294 3449 topology_manager.go:215] "Topology Admit Handler" podUID="ed066e79-bda0-48d8-b292-805d704f7c86" podNamespace="calico-system" podName="calico-typha-5c7fbf4f6f-xjxw4" Jan 17 12:17:28.680217 kubelet[3449]: I0117 12:17:28.679978 3449 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ed066e79-bda0-48d8-b292-805d704f7c86-tigera-ca-bundle\") pod \"calico-typha-5c7fbf4f6f-xjxw4\" (UID: \"ed066e79-bda0-48d8-b292-805d704f7c86\") " pod="calico-system/calico-typha-5c7fbf4f6f-xjxw4" Jan 17 12:17:28.680217 kubelet[3449]: I0117 12:17:28.680047 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ed066e79-bda0-48d8-b292-805d704f7c86-typha-certs\") pod \"calico-typha-5c7fbf4f6f-xjxw4\" (UID: \"ed066e79-bda0-48d8-b292-805d704f7c86\") " pod="calico-system/calico-typha-5c7fbf4f6f-xjxw4" Jan 17 12:17:28.680217 kubelet[3449]: I0117 12:17:28.680086 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-svdhf\" (UniqueName: \"kubernetes.io/projected/ed066e79-bda0-48d8-b292-805d704f7c86-kube-api-access-svdhf\") pod \"calico-typha-5c7fbf4f6f-xjxw4\" (UID: \"ed066e79-bda0-48d8-b292-805d704f7c86\") " pod="calico-system/calico-typha-5c7fbf4f6f-xjxw4" Jan 17 12:17:28.863960 kubelet[3449]: I0117 12:17:28.863827 3449 topology_manager.go:215] "Topology Admit Handler" podUID="798ca213-bdbd-46e5-8465-0448d4218cbd" podNamespace="calico-system" podName="calico-node-qctfl" Jan 17 12:17:28.929519 containerd[1782]: time="2025-01-17T12:17:28.929384577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c7fbf4f6f-xjxw4,Uid:ed066e79-bda0-48d8-b292-805d704f7c86,Namespace:calico-system,Attempt:0,}" Jan 17 12:17:28.983173 kubelet[3449]: I0117 12:17:28.981732 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-policysync\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983173 kubelet[3449]: I0117 12:17:28.981818 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-cni-net-dir\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983173 kubelet[3449]: I0117 12:17:28.981856 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/798ca213-bdbd-46e5-8465-0448d4218cbd-tigera-ca-bundle\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983173 kubelet[3449]: I0117 12:17:28.981885 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-var-run-calico\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983173 kubelet[3449]: I0117 12:17:28.981922 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-cni-bin-dir\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 
12:17:28.983531 kubelet[3449]: I0117 12:17:28.981952 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/798ca213-bdbd-46e5-8465-0448d4218cbd-node-certs\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983531 kubelet[3449]: I0117 12:17:28.981981 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-xtables-lock\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983531 kubelet[3449]: I0117 12:17:28.982011 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-flexvol-driver-host\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983531 kubelet[3449]: I0117 12:17:28.982041 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vk9qz\" (UniqueName: \"kubernetes.io/projected/798ca213-bdbd-46e5-8465-0448d4218cbd-kube-api-access-vk9qz\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.983531 kubelet[3449]: I0117 12:17:28.982072 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-lib-modules\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.984997 kubelet[3449]: I0117 12:17:28.982100 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-var-lib-calico\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:28.984997 kubelet[3449]: I0117 12:17:28.982129 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/798ca213-bdbd-46e5-8465-0448d4218cbd-cni-log-dir\") pod \"calico-node-qctfl\" (UID: \"798ca213-bdbd-46e5-8465-0448d4218cbd\") " pod="calico-system/calico-node-qctfl" Jan 17 12:17:29.007222 kubelet[3449]: I0117 12:17:29.007133 3449 topology_manager.go:215] "Topology Admit Handler" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" podNamespace="calico-system" podName="csi-node-driver-8wmfr" Jan 17 12:17:29.007691 kubelet[3449]: E0117 12:17:29.007498 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:29.012891 containerd[1782]: time="2025-01-17T12:17:29.012746020Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:29.013560 containerd[1782]: time="2025-01-17T12:17:29.013064927Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:29.013783 containerd[1782]: time="2025-01-17T12:17:29.013733842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:29.014094 containerd[1782]: time="2025-01-17T12:17:29.014060349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:29.082890 kubelet[3449]: I0117 12:17:29.082845 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a3abf411-90ba-45ad-b3b8-494831f9b2d4-kubelet-dir\") pod \"csi-node-driver-8wmfr\" (UID: \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\") " pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:29.084065 kubelet[3449]: I0117 12:17:29.083434 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzlv8\" (UniqueName: \"kubernetes.io/projected/a3abf411-90ba-45ad-b3b8-494831f9b2d4-kube-api-access-gzlv8\") pod \"csi-node-driver-8wmfr\" (UID: \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\") " pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:29.084065 kubelet[3449]: I0117 12:17:29.083496 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a3abf411-90ba-45ad-b3b8-494831f9b2d4-socket-dir\") pod \"csi-node-driver-8wmfr\" (UID: \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\") " pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:29.084065 kubelet[3449]: I0117 12:17:29.083577 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a3abf411-90ba-45ad-b3b8-494831f9b2d4-varrun\") pod \"csi-node-driver-8wmfr\" (UID: \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\") " pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:29.084065 kubelet[3449]: I0117 12:17:29.083612 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a3abf411-90ba-45ad-b3b8-494831f9b2d4-registration-dir\") pod \"csi-node-driver-8wmfr\" (UID: \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\") " pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:29.089042 kubelet[3449]: E0117 12:17:29.088701 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.089042 kubelet[3449]: W0117 12:17:29.088722 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.089042 kubelet[3449]: E0117 12:17:29.088752 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.090254 kubelet[3449]: E0117 12:17:29.089991 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.090254 kubelet[3449]: W0117 12:17:29.090007 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.090254 kubelet[3449]: E0117 12:17:29.090042 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.091479 kubelet[3449]: E0117 12:17:29.091075 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.091479 kubelet[3449]: W0117 12:17:29.091094 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.091479 kubelet[3449]: E0117 12:17:29.091218 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.091659 kubelet[3449]: E0117 12:17:29.091633 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.091659 kubelet[3449]: W0117 12:17:29.091645 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.091741 kubelet[3449]: E0117 12:17:29.091663 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.092258 kubelet[3449]: E0117 12:17:29.092150 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.092258 kubelet[3449]: W0117 12:17:29.092162 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.092373 kubelet[3449]: E0117 12:17:29.092279 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.093565 kubelet[3449]: E0117 12:17:29.093006 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.093565 kubelet[3449]: W0117 12:17:29.093023 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
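
The three-line pattern repeating above is the kubelet's FlexVolume plugin prober at work. On each probe it executes every driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the argument init (visible in the log as args: [init]) and JSON-decodes whatever the driver prints on stdout. Here the nodeagent~uds/uds binary does not exist yet, so the call produces no output (output: ""), and decoding an empty byte slice is precisely what produces the literal "unexpected end of JSON input" text. A minimal sketch reproducing just the decoding step, in illustrative Go (not kubelet source):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    func main() {
        // Stand-in for the missing driver's stdout: the kubelet ran
        // ".../nodeagent~uds/uds init" and got no output at all.
        var status map[string]interface{}
        err := json.Unmarshal([]byte(""), &status)
        fmt.Println(err) // prints: unexpected end of JSON input
    }

The errors are noisy but not fatal to the probe loop; they repeat until something installs a working driver at that path, which the calico-node pod being set up here is about to do.

Jan 17 12:17:29.093565 kubelet[3449]: E0117 12:17:29.093042 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.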
Error: unexpected end of JSON input" Jan 17 12:17:29.098913 kubelet[3449]: E0117 12:17:29.098895 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.098913 kubelet[3449]: W0117 12:17:29.098913 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.099071 kubelet[3449]: E0117 12:17:29.098944 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.100155 kubelet[3449]: E0117 12:17:29.100098 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.100155 kubelet[3449]: W0117 12:17:29.100116 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.100155 kubelet[3449]: E0117 12:17:29.100133 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.116301 kubelet[3449]: E0117 12:17:29.116277 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.116498 kubelet[3449]: W0117 12:17:29.116341 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.116498 kubelet[3449]: E0117 12:17:29.116366 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.152345 containerd[1782]: time="2025-01-17T12:17:29.152234405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5c7fbf4f6f-xjxw4,Uid:ed066e79-bda0-48d8-b292-805d704f7c86,Namespace:calico-system,Attempt:0,} returns sandbox id \"c364b7f4bfca82098949119ada1ac40676ab4b2b4c0e5af4415c315d1390b9c8\"" Jan 17 12:17:29.155004 containerd[1782]: time="2025-01-17T12:17:29.154960465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 17 12:17:29.173520 containerd[1782]: time="2025-01-17T12:17:29.173465375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qctfl,Uid:798ca213-bdbd-46e5-8465-0448d4218cbd,Namespace:calico-system,Attempt:0,}" Jan 17 12:17:29.184624 kubelet[3449]: E0117 12:17:29.184258 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.184624 kubelet[3449]: W0117 12:17:29.184282 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.184624 kubelet[3449]: E0117 12:17:29.184336 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.186809 kubelet[3449]: E0117 12:17:29.186787 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.187438 kubelet[3449]: W0117 12:17:29.186810 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.187438 kubelet[3449]: E0117 12:17:29.186847 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.188309 kubelet[3449]: E0117 12:17:29.188034 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.188309 kubelet[3449]: W0117 12:17:29.188052 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.188309 kubelet[3449]: E0117 12:17:29.188080 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.189024 kubelet[3449]: E0117 12:17:29.188748 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.189024 kubelet[3449]: W0117 12:17:29.188786 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.189024 kubelet[3449]: E0117 12:17:29.188960 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.189704 kubelet[3449]: E0117 12:17:29.189683 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.189704 kubelet[3449]: W0117 12:17:29.189697 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.190269 kubelet[3449]: E0117 12:17:29.190084 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.190269 kubelet[3449]: E0117 12:17:29.190185 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.190269 kubelet[3449]: W0117 12:17:29.190195 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.190440 kubelet[3449]: E0117 12:17:29.190284 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.190488 kubelet[3449]: E0117 12:17:29.190445 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.190488 kubelet[3449]: W0117 12:17:29.190454 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.190599 kubelet[3449]: E0117 12:17:29.190543 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.190926 kubelet[3449]: E0117 12:17:29.190657 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.190926 kubelet[3449]: W0117 12:17:29.190668 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.190926 kubelet[3449]: E0117 12:17:29.190756 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.191727 kubelet[3449]: E0117 12:17:29.191001 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.191727 kubelet[3449]: W0117 12:17:29.191012 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.191727 kubelet[3449]: E0117 12:17:29.191104 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.191727 kubelet[3449]: E0117 12:17:29.191236 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.191727 kubelet[3449]: W0117 12:17:29.191259 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.191727 kubelet[3449]: E0117 12:17:29.191287 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.191834 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.192177 kubelet[3449]: W0117 12:17:29.191847 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.191933 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.192192 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.192177 kubelet[3449]: W0117 12:17:29.192203 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.192475 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.192614 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.192177 kubelet[3449]: W0117 12:17:29.192624 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.192724 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.192177 kubelet[3449]: E0117 12:17:29.192886 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.194423 kubelet[3449]: W0117 12:17:29.192895 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.192980 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.193100 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.194423 kubelet[3449]: W0117 12:17:29.193108 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.193196 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.193379 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.194423 kubelet[3449]: W0117 12:17:29.193389 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.193444 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.194423 kubelet[3449]: E0117 12:17:29.193649 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.194423 kubelet[3449]: W0117 12:17:29.193658 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.193690 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.194215 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.196821 kubelet[3449]: W0117 12:17:29.194227 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.194318 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.194824 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.196821 kubelet[3449]: W0117 12:17:29.194836 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.194925 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.195202 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.196821 kubelet[3449]: W0117 12:17:29.195213 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.196821 kubelet[3449]: E0117 12:17:29.195320 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.195462 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.199576 kubelet[3449]: W0117 12:17:29.195482 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.196413 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.196707 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.199576 kubelet[3449]: W0117 12:17:29.196720 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.196815 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.197917 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.199576 kubelet[3449]: W0117 12:17:29.197929 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.198112 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.199576 kubelet[3449]: E0117 12:17:29.198344 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.199981 kubelet[3449]: W0117 12:17:29.198355 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.199981 kubelet[3449]: E0117 12:17:29.198545 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.199981 kubelet[3449]: E0117 12:17:29.199570 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.199981 kubelet[3449]: W0117 12:17:29.199582 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.199981 kubelet[3449]: E0117 12:17:29.199599 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:29.206494 kubelet[3449]: E0117 12:17:29.206413 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:29.206494 kubelet[3449]: W0117 12:17:29.206429 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:29.206494 kubelet[3449]: E0117 12:17:29.206470 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:29.232098 containerd[1782]: time="2025-01-17T12:17:29.232000069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:29.232262 containerd[1782]: time="2025-01-17T12:17:29.232118772Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:29.232262 containerd[1782]: time="2025-01-17T12:17:29.232154372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:29.233208 containerd[1782]: time="2025-01-17T12:17:29.232311876Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:29.282990 containerd[1782]: time="2025-01-17T12:17:29.282595088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qctfl,Uid:798ca213-bdbd-46e5-8465-0448d4218cbd,Namespace:calico-system,Attempt:0,} returns sandbox id \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\"" Jan 17 12:17:30.335043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4066405786.mount: Deactivated successfully. Jan 17 12:17:30.912204 kubelet[3449]: E0117 12:17:30.911337 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:31.200671 containerd[1782]: time="2025-01-17T12:17:31.200520504Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.204044 containerd[1782]: time="2025-01-17T12:17:31.203973080Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 17 12:17:31.209099 containerd[1782]: time="2025-01-17T12:17:31.209009591Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.215588 containerd[1782]: time="2025-01-17T12:17:31.215378132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:31.218534 containerd[1782]: time="2025-01-17T12:17:31.218495601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.063485135s" Jan 17 12:17:31.218734 containerd[1782]: time="2025-01-17T12:17:31.218551402Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 17 12:17:31.235829 containerd[1782]: time="2025-01-17T12:17:31.234880863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 17 12:17:31.247598 containerd[1782]: 
time="2025-01-17T12:17:31.247415441Z" level=info msg="CreateContainer within sandbox \"c364b7f4bfca82098949119ada1ac40676ab4b2b4c0e5af4415c315d1390b9c8\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 12:17:31.291379 containerd[1782]: time="2025-01-17T12:17:31.291332812Z" level=info msg="CreateContainer within sandbox \"c364b7f4bfca82098949119ada1ac40676ab4b2b4c0e5af4415c315d1390b9c8\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bc0dfd59cc42028f2fa8be4743ec4cd1dbefe3bfcc1369c62545c476e1448072\"" Jan 17 12:17:31.291991 containerd[1782]: time="2025-01-17T12:17:31.291958226Z" level=info msg="StartContainer for \"bc0dfd59cc42028f2fa8be4743ec4cd1dbefe3bfcc1369c62545c476e1448072\"" Jan 17 12:17:31.371466 containerd[1782]: time="2025-01-17T12:17:31.371409883Z" level=info msg="StartContainer for \"bc0dfd59cc42028f2fa8be4743ec4cd1dbefe3bfcc1369c62545c476e1448072\" returns successfully" Jan 17 12:17:32.043020 kubelet[3449]: I0117 12:17:32.042972 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-5c7fbf4f6f-xjxw4" podStartSLOduration=1.9773859539999998 podStartE2EDuration="4.042927934s" podCreationTimestamp="2025-01-17 12:17:28 +0000 UTC" firstStartedPulling="2025-01-17 12:17:29.154452754 +0000 UTC m=+28.365054592" lastFinishedPulling="2025-01-17 12:17:31.219994734 +0000 UTC m=+30.430596572" observedRunningTime="2025-01-17 12:17:32.042404922 +0000 UTC m=+31.253006760" watchObservedRunningTime="2025-01-17 12:17:32.042927934 +0000 UTC m=+31.253529772" Jan 17 12:17:32.082900 kubelet[3449]: E0117 12:17:32.082852 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.082900 kubelet[3449]: W0117 12:17:32.082881 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.082900 kubelet[3449]: E0117 12:17:32.082917 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.083375 kubelet[3449]: E0117 12:17:32.083203 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.083375 kubelet[3449]: W0117 12:17:32.083219 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.083375 kubelet[3449]: E0117 12:17:32.083242 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.083636 kubelet[3449]: E0117 12:17:32.083515 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.083636 kubelet[3449]: W0117 12:17:32.083528 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.083636 kubelet[3449]: E0117 12:17:32.083548 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.083992 kubelet[3449]: E0117 12:17:32.083789 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.083992 kubelet[3449]: W0117 12:17:32.083802 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.083992 kubelet[3449]: E0117 12:17:32.083821 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.084264 kubelet[3449]: E0117 12:17:32.084060 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.084264 kubelet[3449]: W0117 12:17:32.084085 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.084264 kubelet[3449]: E0117 12:17:32.084106 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.084480 kubelet[3449]: E0117 12:17:32.084343 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.084480 kubelet[3449]: W0117 12:17:32.084381 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.084480 kubelet[3449]: E0117 12:17:32.084402 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.084747 kubelet[3449]: E0117 12:17:32.084641 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.084747 kubelet[3449]: W0117 12:17:32.084654 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.084747 kubelet[3449]: E0117 12:17:32.084673 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.085064 kubelet[3449]: E0117 12:17:32.084932 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.085064 kubelet[3449]: W0117 12:17:32.084948 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.085064 kubelet[3449]: E0117 12:17:32.084967 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.085351 kubelet[3449]: E0117 12:17:32.085200 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.085351 kubelet[3449]: W0117 12:17:32.085212 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.085351 kubelet[3449]: E0117 12:17:32.085232 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.085634 kubelet[3449]: E0117 12:17:32.085443 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.085634 kubelet[3449]: W0117 12:17:32.085455 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.085634 kubelet[3449]: E0117 12:17:32.085474 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.085959 kubelet[3449]: E0117 12:17:32.085684 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.085959 kubelet[3449]: W0117 12:17:32.085696 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.085959 kubelet[3449]: E0117 12:17:32.085716 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.085959 kubelet[3449]: E0117 12:17:32.085950 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.086222 kubelet[3449]: W0117 12:17:32.085963 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.086222 kubelet[3449]: E0117 12:17:32.085984 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.086222 kubelet[3449]: E0117 12:17:32.086211 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.086222 kubelet[3449]: W0117 12:17:32.086223 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.086572 kubelet[3449]: E0117 12:17:32.086241 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.086572 kubelet[3449]: E0117 12:17:32.086497 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.086572 kubelet[3449]: W0117 12:17:32.086511 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.086572 kubelet[3449]: E0117 12:17:32.086529 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.086891 kubelet[3449]: E0117 12:17:32.086745 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.086891 kubelet[3449]: W0117 12:17:32.086757 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.086891 kubelet[3449]: E0117 12:17:32.086789 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.112355 kubelet[3449]: E0117 12:17:32.112318 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.112355 kubelet[3449]: W0117 12:17:32.112348 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.112799 kubelet[3449]: E0117 12:17:32.112380 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.112799 kubelet[3449]: E0117 12:17:32.112783 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.112799 kubelet[3449]: W0117 12:17:32.112801 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.113161 kubelet[3449]: E0117 12:17:32.112832 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.113161 kubelet[3449]: E0117 12:17:32.113130 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.113161 kubelet[3449]: W0117 12:17:32.113144 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.113434 kubelet[3449]: E0117 12:17:32.113178 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.113524 kubelet[3449]: E0117 12:17:32.113500 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.113524 kubelet[3449]: W0117 12:17:32.113514 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.113716 kubelet[3449]: E0117 12:17:32.113542 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.113803 kubelet[3449]: E0117 12:17:32.113793 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.113877 kubelet[3449]: W0117 12:17:32.113807 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.113938 kubelet[3449]: E0117 12:17:32.113906 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.114129 kubelet[3449]: E0117 12:17:32.114101 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.114129 kubelet[3449]: W0117 12:17:32.114116 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.114320 kubelet[3449]: E0117 12:17:32.114229 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.114410 kubelet[3449]: E0117 12:17:32.114390 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.114410 kubelet[3449]: W0117 12:17:32.114405 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.114635 kubelet[3449]: E0117 12:17:32.114440 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.114723 kubelet[3449]: E0117 12:17:32.114650 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.114723 kubelet[3449]: W0117 12:17:32.114662 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.114723 kubelet[3449]: E0117 12:17:32.114687 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.115042 kubelet[3449]: E0117 12:17:32.115021 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.115042 kubelet[3449]: W0117 12:17:32.115037 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.115185 kubelet[3449]: E0117 12:17:32.115067 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.115546 kubelet[3449]: E0117 12:17:32.115522 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.115546 kubelet[3449]: W0117 12:17:32.115539 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.115849 kubelet[3449]: E0117 12:17:32.115566 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.115849 kubelet[3449]: E0117 12:17:32.115843 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.116117 kubelet[3449]: W0117 12:17:32.115859 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.116117 kubelet[3449]: E0117 12:17:32.115909 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.116117 kubelet[3449]: E0117 12:17:32.116098 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.116117 kubelet[3449]: W0117 12:17:32.116109 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.116432 kubelet[3449]: E0117 12:17:32.116227 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.116432 kubelet[3449]: E0117 12:17:32.116426 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.116632 kubelet[3449]: W0117 12:17:32.116439 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.116632 kubelet[3449]: E0117 12:17:32.116463 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.116981 kubelet[3449]: E0117 12:17:32.116716 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.116981 kubelet[3449]: W0117 12:17:32.116729 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.116981 kubelet[3449]: E0117 12:17:32.116758 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.117374 kubelet[3449]: E0117 12:17:32.117353 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.117374 kubelet[3449]: W0117 12:17:32.117369 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.117522 kubelet[3449]: E0117 12:17:32.117436 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.117858 kubelet[3449]: E0117 12:17:32.117780 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.117858 kubelet[3449]: W0117 12:17:32.117797 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.117858 kubelet[3449]: E0117 12:17:32.117834 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.118428 kubelet[3449]: E0117 12:17:32.118264 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.118428 kubelet[3449]: W0117 12:17:32.118291 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.118428 kubelet[3449]: E0117 12:17:32.118316 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:32.118731 kubelet[3449]: E0117 12:17:32.118597 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:32.118731 kubelet[3449]: W0117 12:17:32.118609 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:32.118731 kubelet[3449]: E0117 12:17:32.118623 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:32.916047 kubelet[3449]: E0117 12:17:32.915808 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:32.957930 containerd[1782]: time="2025-01-17T12:17:32.957824167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:32.960571 containerd[1782]: time="2025-01-17T12:17:32.960350323Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 17 12:17:32.970752 containerd[1782]: time="2025-01-17T12:17:32.969937035Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:32.976964 containerd[1782]: time="2025-01-17T12:17:32.976729685Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:32.978236 containerd[1782]: time="2025-01-17T12:17:32.978200817Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.743242652s" Jan 17 12:17:32.978437 containerd[1782]: time="2025-01-17T12:17:32.978382221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 17 12:17:32.982313 containerd[1782]: time="2025-01-17T12:17:32.982275008Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 17 12:17:33.030994 kubelet[3449]: I0117 12:17:33.030954 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
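
The CreateContainer entry above is for the flexvol-driver init container of the calico-node pod, built from the ghcr.io/flatcar/calico/pod2daemon-flexvol image just pulled. In Calico's design that container installs the uds driver binary under the kubelet's FlexVolume plugin directory (the same /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds path the probes above keep failing on), so the error bursts should stop once it has run. For reference, the FlexVolume contract the probe expects is simply a JSON status object on stdout; a minimal sketch of a driver answering the init call, illustrative only and not Calico's actual implementation:

    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus is the JSON object the kubelet's FlexVolume probe
    // expects a driver to print on stdout.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) > 1 && os.Args[1] == "init" {
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
            return
        }
        fmt.Println(`{"status":"Not supported"}`)
        os.Exit(1)
    }

Any non-empty, well-formed status object like this would already avoid the "unexpected end of JSON input" decode failures seen throughout this boot.

Jan 17 12:17:33.035006 containerd[1782]: time="2025-01-17T12:17:33.034837070Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id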
\"54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf\"" Jan 17 12:17:33.035724 containerd[1782]: time="2025-01-17T12:17:33.035687889Z" level=info msg="StartContainer for \"54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf\"" Jan 17 12:17:33.096187 kubelet[3449]: E0117 12:17:33.095455 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.096187 kubelet[3449]: W0117 12:17:33.095481 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.096187 kubelet[3449]: E0117 12:17:33.095513 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.106845 kubelet[3449]: E0117 12:17:33.103624 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.106845 kubelet[3449]: W0117 12:17:33.103645 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.106845 kubelet[3449]: E0117 12:17:33.103672 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.107524 kubelet[3449]: E0117 12:17:33.104698 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.109606 kubelet[3449]: W0117 12:17:33.108739 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.109606 kubelet[3449]: E0117 12:17:33.108783 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.112781 kubelet[3449]: E0117 12:17:33.110305 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.112781 kubelet[3449]: W0117 12:17:33.110320 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.112781 kubelet[3449]: E0117 12:17:33.110342 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.113111 kubelet[3449]: E0117 12:17:33.113046 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.113674 kubelet[3449]: W0117 12:17:33.113566 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.113997 kubelet[3449]: E0117 12:17:33.113967 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.114475 kubelet[3449]: E0117 12:17:33.114347 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.114475 kubelet[3449]: W0117 12:17:33.114376 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.114475 kubelet[3449]: E0117 12:17:33.114396 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.114818 kubelet[3449]: E0117 12:17:33.114799 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.114818 kubelet[3449]: W0117 12:17:33.114817 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.115056 kubelet[3449]: E0117 12:17:33.114833 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.115118 kubelet[3449]: E0117 12:17:33.115108 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.115156 kubelet[3449]: W0117 12:17:33.115119 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.115156 kubelet[3449]: E0117 12:17:33.115136 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.115425 kubelet[3449]: E0117 12:17:33.115363 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.115425 kubelet[3449]: W0117 12:17:33.115376 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.115425 kubelet[3449]: E0117 12:17:33.115392 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.115727 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117104 kubelet[3449]: W0117 12:17:33.115737 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.115807 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.116069 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117104 kubelet[3449]: W0117 12:17:33.116079 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.116109 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.116339 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117104 kubelet[3449]: W0117 12:17:33.116349 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.116384 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.117104 kubelet[3449]: E0117 12:17:33.116634 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117491 kubelet[3449]: W0117 12:17:33.116659 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117491 kubelet[3449]: E0117 12:17:33.116676 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.117491 kubelet[3449]: E0117 12:17:33.116934 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117491 kubelet[3449]: W0117 12:17:33.116945 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117491 kubelet[3449]: E0117 12:17:33.116961 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.117491 kubelet[3449]: E0117 12:17:33.117190 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.117491 kubelet[3449]: W0117 12:17:33.117200 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.117491 kubelet[3449]: E0117 12:17:33.117217 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.120419 kubelet[3449]: E0117 12:17:33.120402 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.120419 kubelet[3449]: W0117 12:17:33.120417 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.120555 kubelet[3449]: E0117 12:17:33.120435 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.121356 kubelet[3449]: E0117 12:17:33.120893 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.121356 kubelet[3449]: W0117 12:17:33.120907 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.121356 kubelet[3449]: E0117 12:17:33.121064 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.122621 kubelet[3449]: E0117 12:17:33.121594 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.122621 kubelet[3449]: W0117 12:17:33.121607 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.122621 kubelet[3449]: E0117 12:17:33.121628 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.122621 kubelet[3449]: E0117 12:17:33.122217 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.122621 kubelet[3449]: W0117 12:17:33.122230 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.122621 kubelet[3449]: E0117 12:17:33.122294 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.122888 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.124882 kubelet[3449]: W0117 12:17:33.122899 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.123201 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.123612 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.124882 kubelet[3449]: W0117 12:17:33.123624 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.123823 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.124316 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.124882 kubelet[3449]: W0117 12:17:33.124328 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.124882 kubelet[3449]: E0117 12:17:33.124594 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.126005 kubelet[3449]: E0117 12:17:33.124992 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.126005 kubelet[3449]: W0117 12:17:33.125002 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.126005 kubelet[3449]: E0117 12:17:33.125287 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.126005 kubelet[3449]: E0117 12:17:33.125931 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.126005 kubelet[3449]: W0117 12:17:33.125943 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.128424 kubelet[3449]: E0117 12:17:33.128315 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.128424 kubelet[3449]: E0117 12:17:33.128354 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.128424 kubelet[3449]: W0117 12:17:33.128364 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.128424 kubelet[3449]: E0117 12:17:33.128409 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.130088 kubelet[3449]: E0117 12:17:33.130060 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.130088 kubelet[3449]: W0117 12:17:33.130079 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.130218 kubelet[3449]: E0117 12:17:33.130109 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.134067 kubelet[3449]: E0117 12:17:33.134037 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.134067 kubelet[3449]: W0117 12:17:33.134057 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.134443 kubelet[3449]: E0117 12:17:33.134261 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.139814 kubelet[3449]: E0117 12:17:33.139556 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.139814 kubelet[3449]: W0117 12:17:33.139591 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.139814 kubelet[3449]: E0117 12:17:33.139610 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.145560 kubelet[3449]: E0117 12:17:33.145541 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.145700 kubelet[3449]: W0117 12:17:33.145684 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.146839 kubelet[3449]: E0117 12:17:33.145779 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 12:17:33.147937 kubelet[3449]: E0117 12:17:33.147859 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.147937 kubelet[3449]: W0117 12:17:33.147875 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.150579 kubelet[3449]: E0117 12:17:33.148407 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.150579 kubelet[3449]: E0117 12:17:33.149194 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.150579 kubelet[3449]: W0117 12:17:33.149206 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.150579 kubelet[3449]: E0117 12:17:33.149949 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.152837 kubelet[3449]: E0117 12:17:33.152742 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.152993 kubelet[3449]: W0117 12:17:33.152922 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.152993 kubelet[3449]: E0117 12:17:33.152957 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.155513 kubelet[3449]: E0117 12:17:33.155386 3449 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 12:17:33.155513 kubelet[3449]: W0117 12:17:33.155512 3449 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 12:17:33.157384 kubelet[3449]: E0117 12:17:33.155531 3449 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 12:17:33.158496 containerd[1782]: time="2025-01-17T12:17:33.158368502Z" level=info msg="StartContainer for \"54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf\" returns successfully" Jan 17 12:17:33.194925 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf-rootfs.mount: Deactivated successfully. 
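[Editorial note] The repeated kubelet errors above come from its FlexVolume prober: each vendor~driver directory under the plugin path (here /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds) is expected to contain an executable that, when invoked with `init`, prints a JSON status object on stdout. Because the `uds` binary is absent, the call produces empty output, and driver-call.go's unmarshal fails with "unexpected end of JSON input". A minimal sketch of a driver stub that would satisfy the probe — the response shape follows the documented FlexVolume convention, and everything beyond `init` is an illustrative assumption, not the nodeagent~uds driver's real behavior:

```go
// Sketch of a FlexVolume driver stub answering kubelet's "init" probe.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// DriverStatus mirrors the JSON shape kubelet expects on stdout; an
// empty response is exactly what yields "unexpected end of JSON input"
// in the log above.
type DriverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(DriverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
		return
	}
	// Report unimplemented calls explicitly rather than exiting with
	// empty output (assumed behavior for this stub).
	out, _ := json.Marshal(DriverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
```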
Jan 17 12:17:34.484105 containerd[1782]: time="2025-01-17T12:17:34.484027849Z" level=info msg="shim disconnected" id=54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf namespace=k8s.io Jan 17 12:17:34.484105 containerd[1782]: time="2025-01-17T12:17:34.484097550Z" level=warning msg="cleaning up after shim disconnected" id=54fb547ffed278af43933acb8a26233ad4053aa371f97c2ec1a5175f2fc317cf namespace=k8s.io Jan 17 12:17:34.484105 containerd[1782]: time="2025-01-17T12:17:34.484112050Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:17:34.496831 containerd[1782]: time="2025-01-17T12:17:34.496670729Z" level=warning msg="cleanup warnings time=\"2025-01-17T12:17:34Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 17 12:17:34.912485 kubelet[3449]: E0117 12:17:34.911052 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:35.039115 containerd[1782]: time="2025-01-17T12:17:35.039058241Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 17 12:17:36.912400 kubelet[3449]: E0117 12:17:36.911083 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:37.582699 kubelet[3449]: I0117 12:17:37.581945 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:17:38.912560 kubelet[3449]: E0117 12:17:38.912461 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:39.397423 containerd[1782]: time="2025-01-17T12:17:39.397358370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:39.399589 containerd[1782]: time="2025-01-17T12:17:39.399524418Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 17 12:17:39.403645 containerd[1782]: time="2025-01-17T12:17:39.403560707Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:39.408441 containerd[1782]: time="2025-01-17T12:17:39.408361214Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:39.409337 containerd[1782]: time="2025-01-17T12:17:39.409164531Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.370046189s" Jan 17 12:17:39.409337 containerd[1782]: time="2025-01-17T12:17:39.409212133Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 17 12:17:39.411661 containerd[1782]: time="2025-01-17T12:17:39.411621386Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 17 12:17:39.447217 containerd[1782]: time="2025-01-17T12:17:39.447163073Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c\"" Jan 17 12:17:39.449797 containerd[1782]: time="2025-01-17T12:17:39.448195996Z" level=info msg="StartContainer for \"23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c\"" Jan 17 12:17:39.489846 systemd[1]: run-containerd-runc-k8s.io-23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c-runc.oHTzEl.mount: Deactivated successfully. Jan 17 12:17:39.524487 containerd[1782]: time="2025-01-17T12:17:39.524334482Z" level=info msg="StartContainer for \"23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c\" returns successfully" Jan 17 12:17:40.912041 kubelet[3449]: E0117 12:17:40.911191 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:40.999214 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c-rootfs.mount: Deactivated successfully. 
Jan 17 12:17:41.037412 kubelet[3449]: I0117 12:17:41.037374 3449 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 17 12:17:41.081659 kubelet[3449]: I0117 12:17:41.080732 3449 topology_manager.go:215] "Topology Admit Handler" podUID="d001a922-c53b-4b28-b857-cb021efc482d" podNamespace="kube-system" podName="coredns-76f75df574-5w5bs" Jan 17 12:17:41.084652 kubelet[3449]: I0117 12:17:41.084350 3449 topology_manager.go:215] "Topology Admit Handler" podUID="041daf5e-35eb-4040-afb1-513c992a1e08" podNamespace="kube-system" podName="coredns-76f75df574-qj8p7" Jan 17 12:17:41.094380 kubelet[3449]: I0117 12:17:41.093379 3449 topology_manager.go:215] "Topology Admit Handler" podUID="10fd51e2-809c-4e12-8a73-1d0eb211a996" podNamespace="calico-system" podName="calico-kube-controllers-5c8fcdbb86-bxf5p" Jan 17 12:17:41.094380 kubelet[3449]: I0117 12:17:41.093833 3449 topology_manager.go:215] "Topology Admit Handler" podUID="f7589a2f-6276-44ab-9ea6-20277e8e0375" podNamespace="calico-apiserver" podName="calico-apiserver-6c9d78bb49-bprl2" Jan 17 12:17:41.098993 containerd[1782]: time="2025-01-17T12:17:41.098850255Z" level=error msg="collecting metrics for 23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c" error="cgroups: cgroup deleted: unknown" Jan 17 12:17:41.109737 kubelet[3449]: I0117 12:17:41.108987 3449 topology_manager.go:215] "Topology Admit Handler" podUID="24682ad7-0945-4efc-b49b-c11c02b2d640" podNamespace="calico-apiserver" podName="calico-apiserver-6c9d78bb49-9nb88" Jan 17 12:17:41.123611 kubelet[3449]: E0117 12:17:41.123580 3449 cadvisor_stats_provider.go:501] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods/besteffort/podf7589a2f-6276-44ab-9ea6-20277e8e0375\": RecentStats: unable to find data in memory cache]" Jan 17 12:17:41.183572 kubelet[3449]: I0117 12:17:41.183435 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/10fd51e2-809c-4e12-8a73-1d0eb211a996-tigera-ca-bundle\") pod \"calico-kube-controllers-5c8fcdbb86-bxf5p\" (UID: \"10fd51e2-809c-4e12-8a73-1d0eb211a996\") " pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" Jan 17 12:17:41.183572 kubelet[3449]: I0117 12:17:41.183490 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mdhnp\" (UniqueName: \"kubernetes.io/projected/10fd51e2-809c-4e12-8a73-1d0eb211a996-kube-api-access-mdhnp\") pod \"calico-kube-controllers-5c8fcdbb86-bxf5p\" (UID: \"10fd51e2-809c-4e12-8a73-1d0eb211a996\") " pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" Jan 17 12:17:41.183572 kubelet[3449]: I0117 12:17:41.183527 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45djz\" (UniqueName: \"kubernetes.io/projected/f7589a2f-6276-44ab-9ea6-20277e8e0375-kube-api-access-45djz\") pod \"calico-apiserver-6c9d78bb49-bprl2\" (UID: \"f7589a2f-6276-44ab-9ea6-20277e8e0375\") " pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" Jan 17 12:17:41.183572 kubelet[3449]: I0117 12:17:41.183563 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/24682ad7-0945-4efc-b49b-c11c02b2d640-calico-apiserver-certs\") pod \"calico-apiserver-6c9d78bb49-9nb88\" (UID: \"24682ad7-0945-4efc-b49b-c11c02b2d640\") " 
pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" Jan 17 12:17:41.183905 kubelet[3449]: I0117 12:17:41.183604 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mx6tw\" (UniqueName: \"kubernetes.io/projected/24682ad7-0945-4efc-b49b-c11c02b2d640-kube-api-access-mx6tw\") pod \"calico-apiserver-6c9d78bb49-9nb88\" (UID: \"24682ad7-0945-4efc-b49b-c11c02b2d640\") " pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" Jan 17 12:17:41.183905 kubelet[3449]: I0117 12:17:41.183638 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/041daf5e-35eb-4040-afb1-513c992a1e08-config-volume\") pod \"coredns-76f75df574-qj8p7\" (UID: \"041daf5e-35eb-4040-afb1-513c992a1e08\") " pod="kube-system/coredns-76f75df574-qj8p7" Jan 17 12:17:41.183905 kubelet[3449]: I0117 12:17:41.183664 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5crr\" (UniqueName: \"kubernetes.io/projected/d001a922-c53b-4b28-b857-cb021efc482d-kube-api-access-v5crr\") pod \"coredns-76f75df574-5w5bs\" (UID: \"d001a922-c53b-4b28-b857-cb021efc482d\") " pod="kube-system/coredns-76f75df574-5w5bs" Jan 17 12:17:41.183905 kubelet[3449]: I0117 12:17:41.183689 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f7589a2f-6276-44ab-9ea6-20277e8e0375-calico-apiserver-certs\") pod \"calico-apiserver-6c9d78bb49-bprl2\" (UID: \"f7589a2f-6276-44ab-9ea6-20277e8e0375\") " pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" Jan 17 12:17:41.183905 kubelet[3449]: I0117 12:17:41.183716 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d001a922-c53b-4b28-b857-cb021efc482d-config-volume\") pod \"coredns-76f75df574-5w5bs\" (UID: \"d001a922-c53b-4b28-b857-cb021efc482d\") " pod="kube-system/coredns-76f75df574-5w5bs" Jan 17 12:17:41.184101 kubelet[3449]: I0117 12:17:41.183740 3449 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwkrf\" (UniqueName: \"kubernetes.io/projected/041daf5e-35eb-4040-afb1-513c992a1e08-kube-api-access-dwkrf\") pod \"coredns-76f75df574-qj8p7\" (UID: \"041daf5e-35eb-4040-afb1-513c992a1e08\") " pod="kube-system/coredns-76f75df574-qj8p7" Jan 17 12:17:41.400081 containerd[1782]: time="2025-01-17T12:17:41.399549315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qj8p7,Uid:041daf5e-35eb-4040-afb1-513c992a1e08,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:41.403888 containerd[1782]: time="2025-01-17T12:17:41.403824210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5w5bs,Uid:d001a922-c53b-4b28-b857-cb021efc482d,Namespace:kube-system,Attempt:0,}" Jan 17 12:17:41.407663 containerd[1782]: time="2025-01-17T12:17:41.407620194Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-bprl2,Uid:f7589a2f-6276-44ab-9ea6-20277e8e0375,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:17:41.415319 containerd[1782]: time="2025-01-17T12:17:41.415284763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-9nb88,Uid:24682ad7-0945-4efc-b49b-c11c02b2d640,Namespace:calico-apiserver,Attempt:0,}" Jan 17 12:17:41.416806 
containerd[1782]: time="2025-01-17T12:17:41.416778797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c8fcdbb86-bxf5p,Uid:10fd51e2-809c-4e12-8a73-1d0eb211a996,Namespace:calico-system,Attempt:0,}" Jan 17 12:17:42.630402 containerd[1782]: time="2025-01-17T12:17:42.630332377Z" level=info msg="shim disconnected" id=23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c namespace=k8s.io Jan 17 12:17:42.630402 containerd[1782]: time="2025-01-17T12:17:42.630400079Z" level=warning msg="cleaning up after shim disconnected" id=23b85a6ebdef8cb083cce14769202b9951bc394198864aa4ed2ba292e419df0c namespace=k8s.io Jan 17 12:17:42.630402 containerd[1782]: time="2025-01-17T12:17:42.630411579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 12:17:42.907432 containerd[1782]: time="2025-01-17T12:17:42.907235257Z" level=error msg="Failed to destroy network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.907914 containerd[1782]: time="2025-01-17T12:17:42.907758068Z" level=error msg="encountered an error cleaning up failed sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.907914 containerd[1782]: time="2025-01-17T12:17:42.907855071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5w5bs,Uid:d001a922-c53b-4b28-b857-cb021efc482d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.908573 kubelet[3449]: E0117 12:17:42.908254 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.908573 kubelet[3449]: E0117 12:17:42.908496 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-5w5bs" Jan 17 12:17:42.908573 kubelet[3449]: E0117 12:17:42.908533 3449 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-76f75df574-5w5bs" Jan 17 12:17:42.912792 kubelet[3449]: E0117 12:17:42.909564 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-5w5bs_kube-system(d001a922-c53b-4b28-b857-cb021efc482d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-5w5bs_kube-system(d001a922-c53b-4b28-b857-cb021efc482d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5w5bs" podUID="d001a922-c53b-4b28-b857-cb021efc482d" Jan 17 12:17:42.921540 containerd[1782]: time="2025-01-17T12:17:42.921484570Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8wmfr,Uid:a3abf411-90ba-45ad-b3b8-494831f9b2d4,Namespace:calico-system,Attempt:0,}" Jan 17 12:17:42.961757 containerd[1782]: time="2025-01-17T12:17:42.961692753Z" level=error msg="Failed to destroy network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.962084 containerd[1782]: time="2025-01-17T12:17:42.962044160Z" level=error msg="encountered an error cleaning up failed sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.962283 containerd[1782]: time="2025-01-17T12:17:42.962115262Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-bprl2,Uid:f7589a2f-6276-44ab-9ea6-20277e8e0375,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.962418 kubelet[3449]: E0117 12:17:42.962386 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.962482 kubelet[3449]: E0117 12:17:42.962451 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" Jan 17 12:17:42.962528 kubelet[3449]: E0117 12:17:42.962485 3449 kuberuntime_manager.go:1172] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" Jan 17 12:17:42.962861 kubelet[3449]: E0117 12:17:42.962569 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c9d78bb49-bprl2_calico-apiserver(f7589a2f-6276-44ab-9ea6-20277e8e0375)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c9d78bb49-bprl2_calico-apiserver(f7589a2f-6276-44ab-9ea6-20277e8e0375)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" podUID="f7589a2f-6276-44ab-9ea6-20277e8e0375" Jan 17 12:17:42.978453 containerd[1782]: time="2025-01-17T12:17:42.978220116Z" level=error msg="Failed to destroy network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.981100 containerd[1782]: time="2025-01-17T12:17:42.980754671Z" level=error msg="encountered an error cleaning up failed sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.981540 containerd[1782]: time="2025-01-17T12:17:42.981403685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c8fcdbb86-bxf5p,Uid:10fd51e2-809c-4e12-8a73-1d0eb211a996,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.982586 kubelet[3449]: E0117 12:17:42.982432 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:42.982586 kubelet[3449]: E0117 12:17:42.982514 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" Jan 17 12:17:42.982586 kubelet[3449]: E0117 12:17:42.982544 3449 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" Jan 17 12:17:42.983103 kubelet[3449]: E0117 12:17:42.982624 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5c8fcdbb86-bxf5p_calico-system(10fd51e2-809c-4e12-8a73-1d0eb211a996)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5c8fcdbb86-bxf5p_calico-system(10fd51e2-809c-4e12-8a73-1d0eb211a996)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" podUID="10fd51e2-809c-4e12-8a73-1d0eb211a996" Jan 17 12:17:43.006469 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9-shm.mount: Deactivated successfully. Jan 17 12:17:43.006671 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec-shm.mount: Deactivated successfully. Jan 17 12:17:43.007261 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2-shm.mount: Deactivated successfully. 
Jan 17 12:17:43.014501 containerd[1782]: time="2025-01-17T12:17:43.014416210Z" level=error msg="Failed to destroy network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.017091 containerd[1782]: time="2025-01-17T12:17:43.016984667Z" level=error msg="encountered an error cleaning up failed sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.017091 containerd[1782]: time="2025-01-17T12:17:43.017055668Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-9nb88,Uid:24682ad7-0945-4efc-b49b-c11c02b2d640,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.018063 kubelet[3449]: E0117 12:17:43.017352 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.018063 kubelet[3449]: E0117 12:17:43.017419 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" Jan 17 12:17:43.018063 kubelet[3449]: E0117 12:17:43.017465 3449 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" Jan 17 12:17:43.018223 kubelet[3449]: E0117 12:17:43.017545 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c9d78bb49-9nb88_calico-apiserver(24682ad7-0945-4efc-b49b-c11c02b2d640)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c9d78bb49-9nb88_calico-apiserver(24682ad7-0945-4efc-b49b-c11c02b2d640)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" podUID="24682ad7-0945-4efc-b49b-c11c02b2d640" Jan 17 12:17:43.019193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33-shm.mount: Deactivated successfully. Jan 17 12:17:43.021916 containerd[1782]: time="2025-01-17T12:17:43.021642669Z" level=error msg="Failed to destroy network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.026111 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf-shm.mount: Deactivated successfully. Jan 17 12:17:43.028315 containerd[1782]: time="2025-01-17T12:17:43.028133511Z" level=error msg="encountered an error cleaning up failed sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.028315 containerd[1782]: time="2025-01-17T12:17:43.028246814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qj8p7,Uid:041daf5e-35eb-4040-afb1-513c992a1e08,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.028604 kubelet[3449]: E0117 12:17:43.028507 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.028604 kubelet[3449]: E0117 12:17:43.028574 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qj8p7" Jan 17 12:17:43.028604 kubelet[3449]: E0117 12:17:43.028605 3449 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-qj8p7" Jan 17 12:17:43.028785 kubelet[3449]: E0117 12:17:43.028682 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-76f75df574-qj8p7_kube-system(041daf5e-35eb-4040-afb1-513c992a1e08)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-qj8p7_kube-system(041daf5e-35eb-4040-afb1-513c992a1e08)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qj8p7" podUID="041daf5e-35eb-4040-afb1-513c992a1e08" Jan 17 12:17:43.065326 kubelet[3449]: I0117 12:17:43.065131 3449 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:43.067780 containerd[1782]: time="2025-01-17T12:17:43.067497576Z" level=info msg="StopPodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" Jan 17 12:17:43.069472 kubelet[3449]: I0117 12:17:43.069447 3449 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:43.070446 containerd[1782]: time="2025-01-17T12:17:43.069947929Z" level=info msg="Ensure that sandbox da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33 in task-service has been cleanup successfully" Jan 17 12:17:43.070446 containerd[1782]: time="2025-01-17T12:17:43.070025231Z" level=info msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" Jan 17 12:17:43.070446 containerd[1782]: time="2025-01-17T12:17:43.070203935Z" level=info msg="Ensure that sandbox 03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2 in task-service has been cleanup successfully" Jan 17 12:17:43.074228 containerd[1782]: time="2025-01-17T12:17:43.074077020Z" level=error msg="Failed to destroy network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.075779 containerd[1782]: time="2025-01-17T12:17:43.075647155Z" level=error msg="encountered an error cleaning up failed sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.076542 containerd[1782]: time="2025-01-17T12:17:43.076503373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8wmfr,Uid:a3abf411-90ba-45ad-b3b8-494831f9b2d4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.079573 kubelet[3449]: I0117 12:17:43.077047 3449 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:43.079227 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8-shm.mount: Deactivated successfully. Jan 17 12:17:43.080385 containerd[1782]: time="2025-01-17T12:17:43.079916348Z" level=info msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" Jan 17 12:17:43.080385 containerd[1782]: time="2025-01-17T12:17:43.080112953Z" level=info msg="Ensure that sandbox c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec in task-service has been cleanup successfully" Jan 17 12:17:43.080682 kubelet[3449]: E0117 12:17:43.077701 3449 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.080682 kubelet[3449]: E0117 12:17:43.080565 3449 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:43.080682 kubelet[3449]: E0117 12:17:43.080594 3449 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-8wmfr" Jan 17 12:17:43.080933 kubelet[3449]: E0117 12:17:43.080654 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-8wmfr_calico-system(a3abf411-90ba-45ad-b3b8-494831f9b2d4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-8wmfr_calico-system(a3abf411-90ba-45ad-b3b8-494831f9b2d4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:43.086549 kubelet[3449]: I0117 12:17:43.085826 3449 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:43.087161 containerd[1782]: time="2025-01-17T12:17:43.087127707Z" level=info msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" Jan 17 12:17:43.087864 containerd[1782]: time="2025-01-17T12:17:43.087836822Z" level=info msg="Ensure that sandbox ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9 in task-service has been cleanup successfully" Jan 17 12:17:43.088743 kubelet[3449]: I0117 12:17:43.088723 3449 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:43.089395 containerd[1782]: time="2025-01-17T12:17:43.089367756Z" level=info msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" Jan 17 12:17:43.089908 containerd[1782]: time="2025-01-17T12:17:43.089647462Z" level=info msg="Ensure that sandbox 7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf in task-service has been cleanup successfully" Jan 17 12:17:43.104675 containerd[1782]: time="2025-01-17T12:17:43.104643091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 17 12:17:43.194835 containerd[1782]: time="2025-01-17T12:17:43.194657868Z" level=error msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" failed" error="failed to destroy network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.195579 kubelet[3449]: E0117 12:17:43.195306 3449 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:43.195579 kubelet[3449]: E0117 12:17:43.195416 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec"} Jan 17 12:17:43.195579 kubelet[3449]: E0117 12:17:43.195470 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"10fd51e2-809c-4e12-8a73-1d0eb211a996\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:43.195579 kubelet[3449]: E0117 12:17:43.195551 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"10fd51e2-809c-4e12-8a73-1d0eb211a996\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" podUID="10fd51e2-809c-4e12-8a73-1d0eb211a996" Jan 17 12:17:43.204913 containerd[1782]: time="2025-01-17T12:17:43.204850591Z" level=error msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" failed" error="failed to destroy network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 17 12:17:43.205429 kubelet[3449]: E0117 12:17:43.205172 3449 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:43.205429 kubelet[3449]: E0117 12:17:43.205229 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf"} Jan 17 12:17:43.205429 kubelet[3449]: E0117 12:17:43.205275 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"041daf5e-35eb-4040-afb1-513c992a1e08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:43.205429 kubelet[3449]: E0117 12:17:43.205315 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"041daf5e-35eb-4040-afb1-513c992a1e08\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-qj8p7" podUID="041daf5e-35eb-4040-afb1-513c992a1e08" Jan 17 12:17:43.209786 containerd[1782]: time="2025-01-17T12:17:43.208946281Z" level=error msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" failed" error="failed to destroy network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.210241 kubelet[3449]: E0117 12:17:43.210058 3449 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:43.210241 kubelet[3449]: E0117 12:17:43.210109 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2"} Jan 17 12:17:43.210241 kubelet[3449]: E0117 12:17:43.210165 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d001a922-c53b-4b28-b857-cb021efc482d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:43.210241 kubelet[3449]: E0117 12:17:43.210201 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d001a922-c53b-4b28-b857-cb021efc482d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-5w5bs" podUID="d001a922-c53b-4b28-b857-cb021efc482d" Jan 17 12:17:43.212155 containerd[1782]: time="2025-01-17T12:17:43.212117151Z" level=error msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" failed" error="failed to destroy network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.212524 kubelet[3449]: E0117 12:17:43.212508 3449 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:43.212662 kubelet[3449]: E0117 12:17:43.212651 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9"} Jan 17 12:17:43.212818 kubelet[3449]: E0117 12:17:43.212803 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7589a2f-6276-44ab-9ea6-20277e8e0375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:43.213002 kubelet[3449]: E0117 12:17:43.212981 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7589a2f-6276-44ab-9ea6-20277e8e0375\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" podUID="f7589a2f-6276-44ab-9ea6-20277e8e0375" Jan 17 12:17:43.213292 containerd[1782]: time="2025-01-17T12:17:43.213255476Z" level=error msg="StopPodSandbox for 
\"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" failed" error="failed to destroy network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:43.213442 kubelet[3449]: E0117 12:17:43.213423 3449 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:43.213516 kubelet[3449]: E0117 12:17:43.213454 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33"} Jan 17 12:17:43.213516 kubelet[3449]: E0117 12:17:43.213495 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"24682ad7-0945-4efc-b49b-c11c02b2d640\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:43.213665 kubelet[3449]: E0117 12:17:43.213531 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"24682ad7-0945-4efc-b49b-c11c02b2d640\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" podUID="24682ad7-0945-4efc-b49b-c11c02b2d640" Jan 17 12:17:44.103711 kubelet[3449]: I0117 12:17:44.103152 3449 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:44.105288 containerd[1782]: time="2025-01-17T12:17:44.104581146Z" level=info msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" Jan 17 12:17:44.105288 containerd[1782]: time="2025-01-17T12:17:44.104880552Z" level=info msg="Ensure that sandbox 74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8 in task-service has been cleanup successfully" Jan 17 12:17:44.133649 containerd[1782]: time="2025-01-17T12:17:44.133470780Z" level=error msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" failed" error="failed to destroy network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 12:17:44.133838 kubelet[3449]: E0117 12:17:44.133789 3449 
remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:44.134038 kubelet[3449]: E0117 12:17:44.133849 3449 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8"} Jan 17 12:17:44.134038 kubelet[3449]: E0117 12:17:44.133899 3449 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 12:17:44.134038 kubelet[3449]: E0117 12:17:44.133943 3449 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a3abf411-90ba-45ad-b3b8-494831f9b2d4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-8wmfr" podUID="a3abf411-90ba-45ad-b3b8-494831f9b2d4" Jan 17 12:17:48.897993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1619434760.mount: Deactivated successfully. 
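
Every failure in the burst above has the same root cause, stated by the error text itself: the Calico CNI plugin resolves its node name from /var/lib/calico/nodename, a file the calico/node container writes after it starts, and that container's image is still being pulled at this point. Until the file exists, every sandbox ADD and DEL on this host fails on the stat and kubelet retries with backoff. A minimal Go sketch of the gate the message describes (the path and wording come from the log; the function and surrounding code are illustrative, not Calico's source):

package main

import (
	"fmt"
	"os"
	"strings"
)

// nodename reproduces the readiness gate behind the errors above: read the
// node name from the file that calico/node writes on startup, and fail with
// the same hint when it is not there yet.
func nodename() (string, error) {
	const path = "/var/lib/calico/nodename"
	data, err := os.ReadFile(path)
	if err != nil {
		return "", fmt.Errorf("stat %s: %w: check that the calico/node container is running and has mounted /var/lib/calico/", path, err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	if name, err := nodename(); err != nil {
		fmt.Println("CNI not ready:", err)
	} else {
		fmt.Println("node:", name)
	}
}
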
Jan 17 12:17:48.953935 containerd[1782]: time="2025-01-17T12:17:48.953871816Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:48.956374 containerd[1782]: time="2025-01-17T12:17:48.956305570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 17 12:17:48.958898 containerd[1782]: time="2025-01-17T12:17:48.958840325Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:48.963104 containerd[1782]: time="2025-01-17T12:17:48.963050518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:48.963886 containerd[1782]: time="2025-01-17T12:17:48.963706232Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.857250101s" Jan 17 12:17:48.963886 containerd[1782]: time="2025-01-17T12:17:48.963749633Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 17 12:17:48.976293 containerd[1782]: time="2025-01-17T12:17:48.976257008Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 12:17:49.033275 containerd[1782]: time="2025-01-17T12:17:49.033229359Z" level=info msg="CreateContainer within sandbox \"227041439e52f77623dc672a01b845de6b4f61f0dd69287fb4935ccb0b2502d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0311b95879c2a55cfad190fe7892af301e8b0bfc5018560ed97dc8b8ce95c639\"" Jan 17 12:17:49.033835 containerd[1782]: time="2025-01-17T12:17:49.033805771Z" level=info msg="StartContainer for \"0311b95879c2a55cfad190fe7892af301e8b0bfc5018560ed97dc8b8ce95c639\"" Jan 17 12:17:49.092632 containerd[1782]: time="2025-01-17T12:17:49.092491860Z" level=info msg="StartContainer for \"0311b95879c2a55cfad190fe7892af301e8b0bfc5018560ed97dc8b8ce95c639\" returns successfully" Jan 17 12:17:49.156707 kubelet[3449]: I0117 12:17:49.155021 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-qctfl" podStartSLOduration=1.4759873350000001 podStartE2EDuration="21.154937831s" podCreationTimestamp="2025-01-17 12:17:28 +0000 UTC" firstStartedPulling="2025-01-17 12:17:29.285311248 +0000 UTC m=+28.495913086" lastFinishedPulling="2025-01-17 12:17:48.964261744 +0000 UTC m=+48.174863582" observedRunningTime="2025-01-17 12:17:49.148110881 +0000 UTC m=+48.358712719" watchObservedRunningTime="2025-01-17 12:17:49.154937831 +0000 UTC m=+48.365539769" Jan 17 12:17:49.412732 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 12:17:49.413026 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
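
The pod_startup_latency_tracker entry above is worth decoding: podStartE2EDuration (21.154937831s) is observedRunningTime minus podCreationTimestamp, and podStartSLOduration (1.475987335s) is that figure minus the 19.678950496s spent between firstStartedPulling and lastFinishedPulling — startup latency with image-pull time excluded, which the logged numbers decompose into exactly. A small Go check of the arithmetic using the timestamps from that entry:

package main

import (
	"fmt"
	"time"
)

func main() {
	parse := func(s string) time.Time {
		t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
		if err != nil {
			panic(err)
		}
		return t
	}
	// Timestamps copied from the kubelet entry for calico-node-qctfl.
	created := parse("2025-01-17 12:17:28 +0000 UTC")
	pullStart := parse("2025-01-17 12:17:29.285311248 +0000 UTC")
	pullEnd := parse("2025-01-17 12:17:48.964261744 +0000 UTC")
	running := parse("2025-01-17 12:17:49.154937831 +0000 UTC")

	e2e := running.Sub(created)    // 21.154937831s = podStartE2EDuration
	pull := pullEnd.Sub(pullStart) // 19.678950496s spent pulling the node image
	slo := e2e - pull              // 1.475987335s  = podStartSLOduration
	fmt.Println(e2e, pull, slo)
}
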
Jan 17 12:17:50.995849 kernel: bpftool[4733]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 17 12:17:51.295060 systemd-networkd[1362]: vxlan.calico: Link UP Jan 17 12:17:51.295070 systemd-networkd[1362]: vxlan.calico: Gained carrier Jan 17 12:17:52.656037 systemd-networkd[1362]: vxlan.calico: Gained IPv6LL Jan 17 12:17:53.912975 containerd[1782]: time="2025-01-17T12:17:53.911424592Z" level=info msg="StopPodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.969 [INFO][4818] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.971 [INFO][4818] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" iface="eth0" netns="/var/run/netns/cni-2b8eeab2-b621-5be4-7acb-35851178781f" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.971 [INFO][4818] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" iface="eth0" netns="/var/run/netns/cni-2b8eeab2-b621-5be4-7acb-35851178781f" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.971 [INFO][4818] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" iface="eth0" netns="/var/run/netns/cni-2b8eeab2-b621-5be4-7acb-35851178781f" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.971 [INFO][4818] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.971 [INFO][4818] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.993 [INFO][4824] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.993 [INFO][4824] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:53.994 [INFO][4824] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:54.000 [WARNING][4824] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:54.000 [INFO][4824] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:54.001 [INFO][4824] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:54.005337 containerd[1782]: 2025-01-17 12:17:54.004 [INFO][4818] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:17:54.008266 containerd[1782]: time="2025-01-17T12:17:54.007945717Z" level=info msg="TearDown network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" successfully" Jan 17 12:17:54.008266 containerd[1782]: time="2025-01-17T12:17:54.007997918Z" level=info msg="StopPodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" returns successfully" Jan 17 12:17:54.009582 containerd[1782]: time="2025-01-17T12:17:54.009536952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-9nb88,Uid:24682ad7-0945-4efc-b49b-c11c02b2d640,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:17:54.011706 systemd[1]: run-netns-cni\x2d2b8eeab2\x2db621\x2d5be4\x2d7acb\x2d35851178781f.mount: Deactivated successfully. 
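
The odd-looking unit name in the systemd line above is the netns path in systemd's unit-name encoding: '/' becomes '-', so a literal dash in the path must be escaped as \x2d, turning /run/netns/cni-2b8eeab2-b621-5be4-7acb-35851178781f (the /var/run/netns path logged earlier; /var/run is a symlink to /run) into run-netns-cni\x2d2b8eeab2\x2db621\x2d5be4\x2d7acb\x2d35851178781f.mount. A hedged Go sketch of the reverse mapping — illustrative only, systemd-escape(1) defines the complete rules:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeMountUnit reverses the escaping visible in the mount units above:
// "-" separates path components and "\x2d" encodes a literal dash (0x2d).
func unescapeMountUnit(name string) string {
	name = strings.TrimSuffix(name, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			if v, err := strconv.ParseUint(name[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return b.String()
}

func main() {
	fmt.Println(unescapeMountUnit(`run-netns-cni\x2d2b8eeab2\x2db621\x2d5be4\x2d7acb\x2d35851178781f.mount`))
	// Output: /run/netns/cni-2b8eeab2-b621-5be4-7acb-35851178781f
}
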
Jan 17 12:17:54.160374 systemd-networkd[1362]: calib526747e8d7: Link UP Jan 17 12:17:54.160912 systemd-networkd[1362]: calib526747e8d7: Gained carrier Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.091 [INFO][4830] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0 calico-apiserver-6c9d78bb49- calico-apiserver 24682ad7-0945-4efc-b49b-c11c02b2d640 766 0 2025-01-17 12:17:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c9d78bb49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-bcafed7e46 calico-apiserver-6c9d78bb49-9nb88 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib526747e8d7 [] []}} ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.091 [INFO][4830] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.119 [INFO][4842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" HandleID="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.128 [INFO][4842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" HandleID="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"calico-apiserver-6c9d78bb49-9nb88", "timestamp":"2025-01-17 12:17:54.119674276 +0000 UTC"}, Hostname:"ci-4081.3.0-a-bcafed7e46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.129 [INFO][4842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.129 [INFO][4842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.129 [INFO][4842] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.130 [INFO][4842] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.134 [INFO][4842] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.137 [INFO][4842] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.139 [INFO][4842] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.140 [INFO][4842] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.140 [INFO][4842] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.142 [INFO][4842] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14 Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.146 [INFO][4842] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.153 [INFO][4842] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.129/26] block=192.168.114.128/26 handle="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.153 [INFO][4842] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.129/26] handle="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.154 [INFO][4842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
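
The IPAM sequence above shows Calico's block-affinity model: under the host-wide IPAM lock, the plugin confirms this node's affinity for the block 192.168.114.128/26, loads the block, claims 192.168.114.129 from it, and writes the claim under a k8s-pod-network handle so a later DEL can release exactly this allocation. A /26 block holds 64 addresses (.128 through .191), so subsequent pods on this node draw from the same block while the affinity holds. A small Go illustration of the block arithmetic:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// The block this node holds an affinity for, per the IPAM lines above.
	block := netip.MustParsePrefix("192.168.114.128/26")
	size := 1 << (32 - block.Bits()) // 2^6 = 64 addresses, .128 through .191
	fmt.Println("block:", block, "addresses:", size)

	// Address claimed from the block for calico-apiserver-6c9d78bb49-9nb88.
	assigned := netip.MustParseAddr("192.168.114.129")
	fmt.Println("contains", assigned, ":", block.Contains(assigned)) // true
}
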
Jan 17 12:17:54.183958 containerd[1782]: 2025-01-17 12:17:54.154 [INFO][4842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.129/26] IPv6=[] ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" HandleID="k8s-pod-network.1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.155 [INFO][4830] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"24682ad7-0945-4efc-b49b-c11c02b2d640", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"calico-apiserver-6c9d78bb49-9nb88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib526747e8d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.156 [INFO][4830] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.129/32] ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.156 [INFO][4830] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib526747e8d7 ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.160 [INFO][4830] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.160 [INFO][4830] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"24682ad7-0945-4efc-b49b-c11c02b2d640", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14", Pod:"calico-apiserver-6c9d78bb49-9nb88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib526747e8d7", MAC:"0e:c8:be:ff:56:ef", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:54.185839 containerd[1782]: 2025-01-17 12:17:54.180 [INFO][4830] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-9nb88" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:17:54.213902 containerd[1782]: time="2025-01-17T12:17:54.213607143Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:54.213902 containerd[1782]: time="2025-01-17T12:17:54.213809147Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:54.213902 containerd[1782]: time="2025-01-17T12:17:54.213836548Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:54.215803 containerd[1782]: time="2025-01-17T12:17:54.214866271Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:54.278590 containerd[1782]: time="2025-01-17T12:17:54.278546372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-9nb88,Uid:24682ad7-0945-4efc-b49b-c11c02b2d640,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14\"" Jan 17 12:17:54.281105 containerd[1782]: time="2025-01-17T12:17:54.280590817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 17 12:17:56.047913 systemd-networkd[1362]: calib526747e8d7: Gained IPv6LL Jan 17 12:17:56.634250 containerd[1782]: time="2025-01-17T12:17:56.634194118Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.637338 containerd[1782]: time="2025-01-17T12:17:56.637273586Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 17 12:17:56.642197 containerd[1782]: time="2025-01-17T12:17:56.642136393Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.647017 containerd[1782]: time="2025-01-17T12:17:56.646962199Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:17:56.648220 containerd[1782]: time="2025-01-17T12:17:56.647636214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.366998195s" Jan 17 12:17:56.648220 containerd[1782]: time="2025-01-17T12:17:56.647675514Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 17 12:17:56.649706 containerd[1782]: time="2025-01-17T12:17:56.649677158Z" level=info msg="CreateContainer within sandbox \"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:17:56.683795 containerd[1782]: time="2025-01-17T12:17:56.683363600Z" level=info msg="CreateContainer within sandbox \"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4f21aa7e822958ac7898dfb0465ab843fc07d24f6c22a442a840b3e70448ed4a\"" Jan 17 12:17:56.684336 containerd[1782]: time="2025-01-17T12:17:56.684298820Z" level=info msg="StartContainer for \"4f21aa7e822958ac7898dfb0465ab843fc07d24f6c22a442a840b3e70448ed4a\"" Jan 17 12:17:56.768624 containerd[1782]: time="2025-01-17T12:17:56.768530474Z" level=info msg="StartContainer for \"4f21aa7e822958ac7898dfb0465ab843fc07d24f6c22a442a840b3e70448ed4a\" returns successfully" Jan 17 12:17:56.917443 containerd[1782]: time="2025-01-17T12:17:56.916953541Z" level=info msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" Jan 17 12:17:56.917609 containerd[1782]: time="2025-01-17T12:17:56.917527954Z" 
level=info msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" Jan 17 12:17:56.921791 containerd[1782]: time="2025-01-17T12:17:56.920912728Z" level=info msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" Jan 17 12:17:56.923690 containerd[1782]: time="2025-01-17T12:17:56.923654688Z" level=info msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.062 [INFO][5006] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.063 [INFO][5006] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" iface="eth0" netns="/var/run/netns/cni-a155bb7c-a582-cf81-8343-f09e5b8b665f" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.064 [INFO][5006] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" iface="eth0" netns="/var/run/netns/cni-a155bb7c-a582-cf81-8343-f09e5b8b665f" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.064 [INFO][5006] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" iface="eth0" netns="/var/run/netns/cni-a155bb7c-a582-cf81-8343-f09e5b8b665f" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.065 [INFO][5006] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.065 [INFO][5006] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.249 [INFO][5023] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.250 [INFO][5023] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.250 [INFO][5023] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.259 [WARNING][5023] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.259 [INFO][5023] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.263 [INFO][5023] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:57.273874 containerd[1782]: 2025-01-17 12:17:57.268 [INFO][5006] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:17:57.279513 systemd[1]: run-netns-cni\x2da155bb7c\x2da582\x2dcf81\x2d8343\x2df09e5b8b665f.mount: Deactivated successfully. Jan 17 12:17:57.285605 containerd[1782]: time="2025-01-17T12:17:57.283859216Z" level=info msg="TearDown network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" successfully" Jan 17 12:17:57.285605 containerd[1782]: time="2025-01-17T12:17:57.283910217Z" level=info msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" returns successfully" Jan 17 12:17:57.285605 containerd[1782]: time="2025-01-17T12:17:57.284929540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5w5bs,Uid:d001a922-c53b-4b28-b857-cb021efc482d,Namespace:kube-system,Attempt:1,}" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.075 [INFO][4988] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.076 [INFO][4988] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" iface="eth0" netns="/var/run/netns/cni-b152e59b-08b4-2a99-caed-5b5747d884ae" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.077 [INFO][4988] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" iface="eth0" netns="/var/run/netns/cni-b152e59b-08b4-2a99-caed-5b5747d884ae" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.080 [INFO][4988] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" iface="eth0" netns="/var/run/netns/cni-b152e59b-08b4-2a99-caed-5b5747d884ae" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.080 [INFO][4988] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.080 [INFO][4988] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.248 [INFO][5025] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.253 [INFO][5025] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.261 [INFO][5025] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.285 [WARNING][5025] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.285 [INFO][5025] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.288 [INFO][5025] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:57.299722 containerd[1782]: 2025-01-17 12:17:57.292 [INFO][4988] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:17:57.301224 containerd[1782]: time="2025-01-17T12:17:57.299988871Z" level=info msg="TearDown network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" successfully" Jan 17 12:17:57.301224 containerd[1782]: time="2025-01-17T12:17:57.300025672Z" level=info msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" returns successfully" Jan 17 12:17:57.301224 containerd[1782]: time="2025-01-17T12:17:57.300747488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qj8p7,Uid:041daf5e-35eb-4040-afb1-513c992a1e08,Namespace:kube-system,Attempt:1,}" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.149 [INFO][5008] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.149 [INFO][5008] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" iface="eth0" netns="/var/run/netns/cni-e380c10a-072b-700c-172a-ca7402d81d8f" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.149 [INFO][5008] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" iface="eth0" netns="/var/run/netns/cni-e380c10a-072b-700c-172a-ca7402d81d8f" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.160 [INFO][5008] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" iface="eth0" netns="/var/run/netns/cni-e380c10a-072b-700c-172a-ca7402d81d8f" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.160 [INFO][5008] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.160 [INFO][5008] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.310 [INFO][5036] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.311 [INFO][5036] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.311 [INFO][5036] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.321 [WARNING][5036] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.321 [INFO][5036] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.326 [INFO][5036] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:57.330901 containerd[1782]: 2025-01-17 12:17:57.328 [INFO][5008] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:17:57.331853 containerd[1782]: time="2025-01-17T12:17:57.331586267Z" level=info msg="TearDown network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" successfully" Jan 17 12:17:57.331853 containerd[1782]: time="2025-01-17T12:17:57.331626068Z" level=info msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" returns successfully" Jan 17 12:17:57.333856 containerd[1782]: time="2025-01-17T12:17:57.333521109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c8fcdbb86-bxf5p,Uid:10fd51e2-809c-4e12-8a73-1d0eb211a996,Namespace:calico-system,Attempt:1,}" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.148 [INFO][5007] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.153 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" iface="eth0" netns="/var/run/netns/cni-788d5690-29bd-f81e-00f4-77382a2b7345" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.155 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" iface="eth0" netns="/var/run/netns/cni-788d5690-29bd-f81e-00f4-77382a2b7345" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.155 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" iface="eth0" netns="/var/run/netns/cni-788d5690-29bd-f81e-00f4-77382a2b7345" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.155 [INFO][5007] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.155 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.309 [INFO][5035] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.311 [INFO][5035] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.326 [INFO][5035] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.336 [WARNING][5035] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.336 [INFO][5035] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.338 [INFO][5035] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:57.341608 containerd[1782]: 2025-01-17 12:17:57.339 [INFO][5007] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:17:57.341608 containerd[1782]: time="2025-01-17T12:17:57.341441384Z" level=info msg="TearDown network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" successfully" Jan 17 12:17:57.341608 containerd[1782]: time="2025-01-17T12:17:57.341469884Z" level=info msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" returns successfully" Jan 17 12:17:57.342943 containerd[1782]: time="2025-01-17T12:17:57.342330603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-bprl2,Uid:f7589a2f-6276-44ab-9ea6-20277e8e0375,Namespace:calico-apiserver,Attempt:1,}" Jan 17 12:17:57.687716 systemd[1]: run-netns-cni\x2db152e59b\x2d08b4\x2d2a99\x2dcaed\x2d5b5747d884ae.mount: Deactivated successfully. Jan 17 12:17:57.687940 systemd[1]: run-netns-cni\x2d788d5690\x2d29bd\x2df81e\x2d00f4\x2d77382a2b7345.mount: Deactivated successfully. Jan 17 12:17:57.688070 systemd[1]: run-netns-cni\x2de380c10a\x2d072b\x2d700c\x2d172a\x2dca7402d81d8f.mount: Deactivated successfully. 
Jan 17 12:17:57.784891 systemd-networkd[1362]: cali959d4c9781c: Link UP Jan 17 12:17:57.785980 systemd-networkd[1362]: cali959d4c9781c: Gained carrier Jan 17 12:17:57.813288 kubelet[3449]: I0117 12:17:57.811030 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c9d78bb49-9nb88" podStartSLOduration=27.443203896 podStartE2EDuration="29.810956908s" podCreationTimestamp="2025-01-17 12:17:28 +0000 UTC" firstStartedPulling="2025-01-17 12:17:54.28024821 +0000 UTC m=+53.490850048" lastFinishedPulling="2025-01-17 12:17:56.648001222 +0000 UTC m=+55.858603060" observedRunningTime="2025-01-17 12:17:57.187983806 +0000 UTC m=+56.398585744" watchObservedRunningTime="2025-01-17 12:17:57.810956908 +0000 UTC m=+57.021558846" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.505 [INFO][5059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0 coredns-76f75df574- kube-system 041daf5e-35eb-4040-afb1-513c992a1e08 783 0 2025-01-17 12:17:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-bcafed7e46 coredns-76f75df574-qj8p7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali959d4c9781c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.505 [INFO][5059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.632 [INFO][5104] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" HandleID="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.663 [INFO][5104] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" HandleID="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f880), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"coredns-76f75df574-qj8p7", "timestamp":"2025-01-17 12:17:57.630557846 +0000 UTC"}, Hostname:"ci-4081.3.0-a-bcafed7e46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.665 [INFO][5104] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
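
The kubelet record above carries its own arithmetic: podStartE2EDuration is observedRunningTime minus podCreationTimestamp (29.810956908s), and podStartSLOduration subtracts the image-pull window, lastFinishedPulling minus firstStartedPulling (2.367753012s), leaving 27.443203896. A stdlib-only check of both figures; the layout string matches the timestamp format the record prints:

package main

import (
	"fmt"
	"time"
)

// Timestamps copied from the kubelet pod_startup_latency_tracker record.
const layout = "2006-01-02 15:04:05 -0700 MST"

func mustParse(s string) time.Time {
	t, err := time.Parse(layout, s) // fractional seconds are accepted even without a layout field
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-01-17 12:17:28 +0000 UTC")
	running := mustParse("2025-01-17 12:17:57.810956908 +0000 UTC")
	pullStart := mustParse("2025-01-17 12:17:54.28024821 +0000 UTC")
	pullEnd := mustParse("2025-01-17 12:17:56.648001222 +0000 UTC")

	e2e := running.Sub(created)         // 29.810956908s
	slo := e2e - pullEnd.Sub(pullStart) // 29.810956908s - 2.367753012s = 27.443203896s
	fmt.Println(e2e, slo)
}
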
Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.665 [INFO][5104] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.665 [INFO][5104] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.668 [INFO][5104] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.682 [INFO][5104] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.704 [INFO][5104] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.709 [INFO][5104] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.715 [INFO][5104] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.715 [INFO][5104] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.722 [INFO][5104] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.733 [INFO][5104] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.752 [INFO][5104] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.130/26] block=192.168.114.128/26 handle="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.752 [INFO][5104] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.130/26] handle="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.753 [INFO][5104] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
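
The assignment above is straight CIDR arithmetic: the host's affine block 192.168.114.128/26 spans 64 addresses, and masking 192.168.114.130 to 26 bits recovers the block base, which is all "Trying affinity" has to verify. A stdlib sketch of that membership check:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.114.128/26") // the affine block from the log
	addr := netip.MustParseAddr("192.168.114.130")       // IP claimed for coredns-76f75df574-qj8p7

	// Masking the address to the block's prefix length recovers the block base.
	base := netip.PrefixFrom(addr, block.Bits()).Masked()
	fmt.Println(block.Contains(addr), base)        // true 192.168.114.128/26
	fmt.Println(1<<(32-block.Bits()), "addresses") // 64 addresses in a /26
}
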
Jan 17 12:17:57.825062 containerd[1782]: 2025-01-17 12:17:57.754 [INFO][5104] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.130/26] IPv6=[] ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" HandleID="k8s-pod-network.2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.757 [INFO][5059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"041daf5e-35eb-4040-afb1-513c992a1e08", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"coredns-76f75df574-qj8p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959d4c9781c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.759 [INFO][5059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.130/32] ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.759 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali959d4c9781c ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.787 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" 
WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.788 [INFO][5059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"041daf5e-35eb-4040-afb1-513c992a1e08", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e", Pod:"coredns-76f75df574-qj8p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959d4c9781c", MAC:"3a:89:6f:45:4b:04", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:57.833119 containerd[1782]: 2025-01-17 12:17:57.814 [INFO][5059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e" Namespace="kube-system" Pod="coredns-76f75df574-qj8p7" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:17:57.893879 systemd-networkd[1362]: cali43451ecfa20: Link UP Jan 17 12:17:57.896265 systemd-networkd[1362]: cali43451ecfa20: Gained carrier Jan 17 12:17:57.920352 containerd[1782]: time="2025-01-17T12:17:57.920069404Z" level=info msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.491 [INFO][5054] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0 coredns-76f75df574- kube-system d001a922-c53b-4b28-b857-cb021efc482d 782 0 2025-01-17 12:17:16 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 
ci-4081.3.0-a-bcafed7e46 coredns-76f75df574-5w5bs eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali43451ecfa20 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.495 [INFO][5054] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.644 [INFO][5103] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" HandleID="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.669 [INFO][5103] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" HandleID="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318720), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"coredns-76f75df574-5w5bs", "timestamp":"2025-01-17 12:17:57.644317148 +0000 UTC"}, Hostname:"ci-4081.3.0-a-bcafed7e46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.670 [INFO][5103] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.752 [INFO][5103] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
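
Note the interleaving across the concurrent CNI ADDs: request [5103] logs "About to acquire host-wide IPAM lock" at 57.670 but "Acquired" only at 57.752, once [5104] finishes its assignment, so address grants on a node are strictly serialized and two pods can never claim the same free slot. A minimal sketch of that serialization, with a hypothetical in-memory allocator standing in for the datastore-backed one:

package main

import (
	"fmt"
	"sync"
)

// hostIPAM serializes per-node address assignment, mirroring the
// "About to acquire / Acquired / Released host-wide IPAM lock" records.
type hostIPAM struct {
	mu   sync.Mutex
	next int // next free offset within the block (toy stand-in for real block state)
}

func (h *hostIPAM) assign(pod string) int {
	h.mu.Lock()         // "Acquired host-wide IPAM lock."
	defer h.mu.Unlock() // "Released host-wide IPAM lock."
	off := h.next
	h.next++
	fmt.Printf("assigned 192.168.114.%d to %s\n", 128+off, pod)
	return off
}

func main() {
	ipam := &hostIPAM{next: 2} // assumes the first offsets were consumed earlier in the boot
	var wg sync.WaitGroup
	for _, pod := range []string{"coredns-qj8p7", "coredns-5w5bs", "apiserver-bprl2"} {
		wg.Add(1)
		go func(p string) { defer wg.Done(); ipam.assign(p) }(pod)
	}
	wg.Wait()
}
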
Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.753 [INFO][5103] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.757 [INFO][5103] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.769 [INFO][5103] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.793 [INFO][5103] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.808 [INFO][5103] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.830 [INFO][5103] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.830 [INFO][5103] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.839 [INFO][5103] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.850 [INFO][5103] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.865 [INFO][5103] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.131/26] block=192.168.114.128/26 handle="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.866 [INFO][5103] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.131/26] handle="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.866 [INFO][5103] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:17:57.933049 containerd[1782]: 2025-01-17 12:17:57.866 [INFO][5103] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.131/26] IPv6=[] ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" HandleID="k8s-pod-network.306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.875 [INFO][5054] cni-plugin/k8s.go 386: Populated endpoint ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d001a922-c53b-4b28-b857-cb021efc482d", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"coredns-76f75df574-5w5bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43451ecfa20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.875 [INFO][5054] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.131/32] ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.875 [INFO][5054] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43451ecfa20 ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.893 [INFO][5054] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" 
WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.895 [INFO][5054] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d001a922-c53b-4b28-b857-cb021efc482d", ResourceVersion:"782", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c", Pod:"coredns-76f75df574-5w5bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43451ecfa20", MAC:"42:02:1e:32:cb:86", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:57.936196 containerd[1782]: 2025-01-17 12:17:57.926 [INFO][5054] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c" Namespace="kube-system" Pod="coredns-76f75df574-5w5bs" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:17:57.951949 containerd[1782]: time="2025-01-17T12:17:57.948408426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:57.951949 containerd[1782]: time="2025-01-17T12:17:57.948497328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:57.951949 containerd[1782]: time="2025-01-17T12:17:57.948526529Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:57.951949 containerd[1782]: time="2025-01-17T12:17:57.948656132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.065918 systemd-networkd[1362]: caliba9fe41f334: Link UP Jan 17 12:17:58.069356 systemd-networkd[1362]: caliba9fe41f334: Gained carrier Jan 17 12:17:58.105562 containerd[1782]: time="2025-01-17T12:17:58.104433752Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:58.105562 containerd[1782]: time="2025-01-17T12:17:58.104515254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:58.105562 containerd[1782]: time="2025-01-17T12:17:58.104546555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.105562 containerd[1782]: time="2025-01-17T12:17:58.104697458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.540 [INFO][5082] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0 calico-apiserver-6c9d78bb49- calico-apiserver f7589a2f-6276-44ab-9ea6-20277e8e0375 784 0 2025-01-17 12:17:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:6c9d78bb49 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-bcafed7e46 calico-apiserver-6c9d78bb49-bprl2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliba9fe41f334 [] []}} ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.540 [INFO][5082] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.699 [INFO][5111] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" HandleID="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.724 [INFO][5111] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" HandleID="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a1210), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"calico-apiserver-6c9d78bb49-bprl2", "timestamp":"2025-01-17 12:17:57.699157953 +0000 UTC"}, 
Hostname:"ci-4081.3.0-a-bcafed7e46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.724 [INFO][5111] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.867 [INFO][5111] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.869 [INFO][5111] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.879 [INFO][5111] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.899 [INFO][5111] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.929 [INFO][5111] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.941 [INFO][5111] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.960 [INFO][5111] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.960 [INFO][5111] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.968 [INFO][5111] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6 Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:57.985 [INFO][5111] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:58.011 [INFO][5111] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.132/26] block=192.168.114.128/26 handle="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:58.012 [INFO][5111] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.132/26] handle="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:58.012 [INFO][5111] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 17 12:17:58.116293 containerd[1782]: 2025-01-17 12:17:58.012 [INFO][5111] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.132/26] IPv6=[] ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" HandleID="k8s-pod-network.7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.028 [INFO][5082] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7589a2f-6276-44ab-9ea6-20277e8e0375", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"calico-apiserver-6c9d78bb49-bprl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba9fe41f334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.028 [INFO][5082] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.132/32] ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.028 [INFO][5082] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba9fe41f334 ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.076 [INFO][5082] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.081 [INFO][5082] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7589a2f-6276-44ab-9ea6-20277e8e0375", ResourceVersion:"784", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6", Pod:"calico-apiserver-6c9d78bb49-bprl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba9fe41f334", MAC:"02:99:d1:57:0f:4d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:58.117545 containerd[1782]: 2025-01-17 12:17:58.108 [INFO][5082] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6" Namespace="calico-apiserver" Pod="calico-apiserver-6c9d78bb49-bprl2" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:17:58.151492 containerd[1782]: time="2025-01-17T12:17:58.151130078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-qj8p7,Uid:041daf5e-35eb-4040-afb1-513c992a1e08,Namespace:kube-system,Attempt:1,} returns sandbox id \"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e\"" Jan 17 12:17:58.170106 containerd[1782]: time="2025-01-17T12:17:58.169579483Z" level=info msg="CreateContainer within sandbox \"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:17:58.213850 systemd-networkd[1362]: cali9ce69d15bec: Link UP Jan 17 12:17:58.222002 systemd-networkd[1362]: cali9ce69d15bec: Gained carrier Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:57.628 [INFO][5083] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0 calico-kube-controllers-5c8fcdbb86- calico-system 10fd51e2-809c-4e12-8a73-1d0eb211a996 785 0 2025-01-17 12:17:29 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5c8fcdbb86 projectcalico.org/namespace:calico-system 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-bcafed7e46 calico-kube-controllers-5c8fcdbb86-bxf5p eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali9ce69d15bec [] []}} ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:57.628 [INFO][5083] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:57.739 [INFO][5121] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" HandleID="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:57.757 [INFO][5121] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" HandleID="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319e10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"calico-kube-controllers-5c8fcdbb86-bxf5p", "timestamp":"2025-01-17 12:17:57.739019928 +0000 UTC"}, Hostname:"ci-4081.3.0-a-bcafed7e46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:57.759 [INFO][5121] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.013 [INFO][5121] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.014 [INFO][5121] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.027 [INFO][5121] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.082 [INFO][5121] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.107 [INFO][5121] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.115 [INFO][5121] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.143 [INFO][5121] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.144 [INFO][5121] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.153 [INFO][5121] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7 Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.167 [INFO][5121] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.187 [INFO][5121] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.133/26] block=192.168.114.128/26 handle="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.187 [INFO][5121] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.133/26] handle="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.187 [INFO][5121] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
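
By this point the block has handed out .130 through .133 in strict order. A /26 holds exactly 64 addresses, so a block's allocation state fits a single uint64 bitmap and "next free address" is a trailing-zeros scan; the bitmap representation below is an assumption for illustration (Calico's block format differs), as is the count of slots already consumed earlier in the boot:

package main

import (
	"fmt"
	"math/bits"
	"net/netip"
)

// nextFree returns the lowest unallocated offset in a 64-slot block,
// or -1 when the block is full.
func nextFree(used uint64) int {
	free := ^used
	if free == 0 {
		return -1
	}
	return bits.TrailingZeros64(free)
}

func main() {
	base := netip.MustParseAddr("192.168.114.128") // block base from the log
	var used uint64
	for i := 0; i < 6; i++ { // offsets 0-5: .128/.129 (assumed) plus the four pods above
		used |= 1 << i
	}
	off := nextFree(used)
	a4 := base.As4()
	a4[3] += byte(off)
	fmt.Println(netip.AddrFrom4(a4)) // 192.168.114.134 would be next
}
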
Jan 17 12:17:58.269697 containerd[1782]: 2025-01-17 12:17:58.187 [INFO][5121] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.133/26] IPv6=[] ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" HandleID="k8s-pod-network.2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.273598 containerd[1782]: 2025-01-17 12:17:58.191 [INFO][5083] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0", GenerateName:"calico-kube-controllers-5c8fcdbb86-", Namespace:"calico-system", SelfLink:"", UID:"10fd51e2-809c-4e12-8a73-1d0eb211a996", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c8fcdbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"calico-kube-controllers-5c8fcdbb86-bxf5p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69d15bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:58.273598 containerd[1782]: 2025-01-17 12:17:58.191 [INFO][5083] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.133/32] ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.273598 containerd[1782]: 2025-01-17 12:17:58.191 [INFO][5083] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9ce69d15bec ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.273598 containerd[1782]: 2025-01-17 12:17:58.224 [INFO][5083] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.273598 
containerd[1782]: 2025-01-17 12:17:58.230 [INFO][5083] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0", GenerateName:"calico-kube-controllers-5c8fcdbb86-", Namespace:"calico-system", SelfLink:"", UID:"10fd51e2-809c-4e12-8a73-1d0eb211a996", ResourceVersion:"785", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c8fcdbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7", Pod:"calico-kube-controllers-5c8fcdbb86-bxf5p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69d15bec", MAC:"76:07:3a:20:6c:77", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:58.273598 containerd[1782]: 2025-01-17 12:17:58.252 [INFO][5083] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7" Namespace="calico-system" Pod="calico-kube-controllers-5c8fcdbb86-bxf5p" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:17:58.304499 containerd[1782]: time="2025-01-17T12:17:58.303276519Z" level=info msg="CreateContainer within sandbox \"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"794f6e8165e8dd6b7e68094dbbc3ff29e4c8bee7440496c38e121407c1218764\"" Jan 17 12:17:58.305723 containerd[1782]: time="2025-01-17T12:17:58.305604470Z" level=info msg="StartContainer for \"794f6e8165e8dd6b7e68094dbbc3ff29e4c8bee7440496c38e121407c1218764\"" Jan 17 12:17:58.315792 containerd[1782]: time="2025-01-17T12:17:58.314363862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:58.315792 containerd[1782]: time="2025-01-17T12:17:58.314438764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:58.315792 containerd[1782]: time="2025-01-17T12:17:58.314454764Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.315792 containerd[1782]: time="2025-01-17T12:17:58.314569267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.429524 containerd[1782]: time="2025-01-17T12:17:58.428193962Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:58.429524 containerd[1782]: time="2025-01-17T12:17:58.428265164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:58.429524 containerd[1782]: time="2025-01-17T12:17:58.428288664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.429524 containerd[1782]: time="2025-01-17T12:17:58.428388266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:58.448872 containerd[1782]: time="2025-01-17T12:17:58.448822015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-5w5bs,Uid:d001a922-c53b-4b28-b857-cb021efc482d,Namespace:kube-system,Attempt:1,} returns sandbox id \"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c\"" Jan 17 12:17:58.456739 containerd[1782]: time="2025-01-17T12:17:58.456695188Z" level=info msg="CreateContainer within sandbox \"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.383 [INFO][5204] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.391 [INFO][5204] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" iface="eth0" netns="/var/run/netns/cni-3c7ddcea-fd45-4ffa-bcff-f9403f047b85" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.397 [INFO][5204] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" iface="eth0" netns="/var/run/netns/cni-3c7ddcea-fd45-4ffa-bcff-f9403f047b85" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.398 [INFO][5204] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" iface="eth0" netns="/var/run/netns/cni-3c7ddcea-fd45-4ffa-bcff-f9403f047b85" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.398 [INFO][5204] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.398 [INFO][5204] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.459 [INFO][5328] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.461 [INFO][5328] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.461 [INFO][5328] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.471 [WARNING][5328] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.471 [INFO][5328] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.472 [INFO][5328] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:17:58.483189 containerd[1782]: 2025-01-17 12:17:58.477 [INFO][5204] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:17:58.483189 containerd[1782]: time="2025-01-17T12:17:58.481956243Z" level=info msg="TearDown network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" successfully" Jan 17 12:17:58.483189 containerd[1782]: time="2025-01-17T12:17:58.482065145Z" level=info msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" returns successfully" Jan 17 12:17:58.491622 containerd[1782]: time="2025-01-17T12:17:58.483624979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8wmfr,Uid:a3abf411-90ba-45ad-b3b8-494831f9b2d4,Namespace:calico-system,Attempt:1,}" Jan 17 12:17:58.553882 containerd[1782]: time="2025-01-17T12:17:58.553174707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c9d78bb49-bprl2,Uid:f7589a2f-6276-44ab-9ea6-20277e8e0375,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6\"" Jan 17 12:17:58.560703 containerd[1782]: time="2025-01-17T12:17:58.560315663Z" level=info msg="CreateContainer within sandbox \"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1fa4124fd5ab0157682ba23edcfebc27068d88cceaa923869d97489e712eae74\"" Jan 17 12:17:58.562009 containerd[1782]: time="2025-01-17T12:17:58.561260684Z" level=info msg="StartContainer for \"1fa4124fd5ab0157682ba23edcfebc27068d88cceaa923869d97489e712eae74\"" Jan 17 12:17:58.566323 containerd[1782]: time="2025-01-17T12:17:58.566293495Z" level=info msg="CreateContainer within sandbox \"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 17 12:17:58.599910 containerd[1782]: time="2025-01-17T12:17:58.599165416Z" level=info msg="StartContainer for \"794f6e8165e8dd6b7e68094dbbc3ff29e4c8bee7440496c38e121407c1218764\" returns successfully" Jan 17 12:17:58.631055 containerd[1782]: time="2025-01-17T12:17:58.631003216Z" level=info msg="CreateContainer within sandbox \"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"62b60f517da3f1ef5356752fd4b29995a4cb54b2c1909f9f96099a6dd856c451\"" Jan 17 12:17:58.633773 containerd[1782]: time="2025-01-17T12:17:58.633722375Z" level=info msg="StartContainer for \"62b60f517da3f1ef5356752fd4b29995a4cb54b2c1909f9f96099a6dd856c451\"" Jan 17 12:17:58.705462 systemd[1]: run-netns-cni\x2d3c7ddcea\x2dfd45\x2d4ffa\x2dbcff\x2df9403f047b85.mount: Deactivated successfully. 
Jan 17 12:17:58.731044 containerd[1782]: time="2025-01-17T12:17:58.727996045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5c8fcdbb86-bxf5p,Uid:10fd51e2-809c-4e12-8a73-1d0eb211a996,Namespace:calico-system,Attempt:1,} returns sandbox id \"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7\"" Jan 17 12:17:58.740200 containerd[1782]: time="2025-01-17T12:17:58.738955686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 17 12:17:58.851344 containerd[1782]: time="2025-01-17T12:17:58.851286253Z" level=info msg="StartContainer for \"1fa4124fd5ab0157682ba23edcfebc27068d88cceaa923869d97489e712eae74\" returns successfully" Jan 17 12:17:58.975271 containerd[1782]: time="2025-01-17T12:17:58.975127172Z" level=info msg="StartContainer for \"62b60f517da3f1ef5356752fd4b29995a4cb54b2c1909f9f96099a6dd856c451\" returns successfully" Jan 17 12:17:58.992167 systemd-networkd[1362]: cali959d4c9781c: Gained IPv6LL Jan 17 12:17:59.066364 systemd-networkd[1362]: calib8473fcdae9: Link UP Jan 17 12:17:59.066743 systemd-networkd[1362]: calib8473fcdae9: Gained carrier Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.835 [INFO][5407] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0 csi-node-driver- calico-system a3abf411-90ba-45ad-b3b8-494831f9b2d4 807 0 2025-01-17 12:17:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-bcafed7e46 csi-node-driver-8wmfr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib8473fcdae9 [] []}} ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.835 [INFO][5407] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.934 [INFO][5486] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" HandleID="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.963 [INFO][5486] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" HandleID="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003a9b60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-bcafed7e46", "pod":"csi-node-driver-8wmfr", "timestamp":"2025-01-17 12:17:58.93451138 +0000 UTC"}, Hostname:"ci-4081.3.0-a-bcafed7e46", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.963 [INFO][5486] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.963 [INFO][5486] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.963 [INFO][5486] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-bcafed7e46' Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.965 [INFO][5486] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.978 [INFO][5486] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:58.990 [INFO][5486] ipam/ipam.go 489: Trying affinity for 192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.000 [INFO][5486] ipam/ipam.go 155: Attempting to load block cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.005 [INFO][5486] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.114.128/26 host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.007 [INFO][5486] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.114.128/26 handle="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.009 [INFO][5486] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105 Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.020 [INFO][5486] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.114.128/26 handle="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.036 [INFO][5486] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.114.134/26] block=192.168.114.128/26 handle="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.036 [INFO][5486] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.114.134/26] handle="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" host="ci-4081.3.0-a-bcafed7e46" Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.036 [INFO][5486] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
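The [INFO][5486] run above is the assignment side of the same protocol: under the host-wide lock, look up the host's block affinities, confirm the affine block (192.168.114.128/26), load it, and claim the next free address from it (192.168.114.134/26 here). A rough sketch of affinity-first assignment follows, assuming a toy next-index block rather than Calico's actual allocation bitmap:

    // block_assign.go: toy version of the affinity-first flow in the
    // ipam.go entries above; the block bookkeeping is illustrative only.
    package main

    import (
        "fmt"
        "net"
    )

    // block models one /26 allocation block affine to a host.
    type block struct {
        cidr *net.IPNet
        next int // index of the next unallocated address
    }

    // assign claims one address from the host's affine block, mirroring
    // "Trying affinity for 192.168.114.128/26" -> "Attempting to assign
    // 1 addresses from block" -> "Successfully claimed IPs".
    func assign(affinities map[string]*block, host string) (net.IP, error) {
        b, ok := affinities[host] // "Looking up existing affinities for host"
        if !ok {
            return nil, fmt.Errorf("no affine block for %s", host)
        }
        ones, bits := b.cidr.Mask.Size()
        if b.next >= 1<<(bits-ones) { // a /26 holds 64 addresses
            return nil, fmt.Errorf("block %s exhausted", b.cidr)
        }
        ip := make(net.IP, 4)
        copy(ip, b.cidr.IP.To4())
        ip[3] += byte(b.next) // safe: a /26 never carries past the last octet
        b.next++
        return ip, nil
    }

    func main() {
        _, cidr, _ := net.ParseCIDR("192.168.114.128/26")
        affinities := map[string]*block{
            "ci-4081.3.0-a-bcafed7e46": {cidr: cidr, next: 6}, // .128+6 = .134
        }
        ip, err := assign(affinities, "ci-4081.3.0-a-bcafed7e46")
        fmt.Println(ip, err) // 192.168.114.134 <nil>
    }

Affinity-first assignment is what keeps a node handing out addresses from its own /26, so routes can be aggregated per block per node instead of advertised per pod.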
Jan 17 12:17:59.093888 containerd[1782]: 2025-01-17 12:17:59.036 [INFO][5486] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.114.134/26] IPv6=[] ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" HandleID="k8s-pod-network.7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.048 [INFO][5407] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3abf411-90ba-45ad-b3b8-494831f9b2d4", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"", Pod:"csi-node-driver-8wmfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8473fcdae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.049 [INFO][5407] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.114.134/32] ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.050 [INFO][5407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib8473fcdae9 ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.068 [INFO][5407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.069 [INFO][5407] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3abf411-90ba-45ad-b3b8-494831f9b2d4", ResourceVersion:"807", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105", Pod:"csi-node-driver-8wmfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8473fcdae9", MAC:"2a:26:5b:08:23:82", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:17:59.095588 containerd[1782]: 2025-01-17 12:17:59.088 [INFO][5407] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105" Namespace="calico-system" Pod="csi-node-driver-8wmfr" WorkloadEndpoint="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:17:59.160575 containerd[1782]: time="2025-01-17T12:17:59.160233537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 12:17:59.160575 containerd[1782]: time="2025-01-17T12:17:59.160308539Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 12:17:59.160575 containerd[1782]: time="2025-01-17T12:17:59.160330839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:59.161543 containerd[1782]: time="2025-01-17T12:17:59.161475464Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 12:17:59.211388 kubelet[3449]: I0117 12:17:59.210694 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-5w5bs" podStartSLOduration=43.210646644 podStartE2EDuration="43.210646644s" podCreationTimestamp="2025-01-17 12:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:59.209973729 +0000 UTC m=+58.420575667" watchObservedRunningTime="2025-01-17 12:17:59.210646644 +0000 UTC m=+58.421248482" Jan 17 12:17:59.252287 kubelet[3449]: I0117 12:17:59.251018 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-6c9d78bb49-bprl2" podStartSLOduration=31.250951529 podStartE2EDuration="31.250951529s" podCreationTimestamp="2025-01-17 12:17:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:59.229285454 +0000 UTC m=+58.439887392" watchObservedRunningTime="2025-01-17 12:17:59.250951529 +0000 UTC m=+58.461553467" Jan 17 12:17:59.287370 kubelet[3449]: I0117 12:17:59.287244 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-qj8p7" podStartSLOduration=43.287189325 podStartE2EDuration="43.287189325s" podCreationTimestamp="2025-01-17 12:17:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-17 12:17:59.285186581 +0000 UTC m=+58.495788419" watchObservedRunningTime="2025-01-17 12:17:59.287189325 +0000 UTC m=+58.497791263" Jan 17 12:17:59.325630 containerd[1782]: time="2025-01-17T12:17:59.325575168Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-8wmfr,Uid:a3abf411-90ba-45ad-b3b8-494831f9b2d4,Namespace:calico-system,Attempt:1,} returns sandbox id \"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105\"" Jan 17 12:17:59.824058 systemd-networkd[1362]: cali43451ecfa20: Gained IPv6LL Jan 17 12:17:59.827756 systemd-networkd[1362]: caliba9fe41f334: Gained IPv6LL Jan 17 12:17:59.952066 systemd-networkd[1362]: cali9ce69d15bec: Gained IPv6LL Jan 17 12:18:00.211228 kubelet[3449]: I0117 12:18:00.211192 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:00.656145 systemd-networkd[1362]: calib8473fcdae9: Gained IPv6LL Jan 17 12:18:00.922034 containerd[1782]: time="2025-01-17T12:18:00.921861622Z" level=info msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.959 [WARNING][5585] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0", GenerateName:"calico-kube-controllers-5c8fcdbb86-", Namespace:"calico-system", SelfLink:"", UID:"10fd51e2-809c-4e12-8a73-1d0eb211a996", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c8fcdbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7", Pod:"calico-kube-controllers-5c8fcdbb86-bxf5p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69d15bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.959 [INFO][5585] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.959 [INFO][5585] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" iface="eth0" netns="" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.959 [INFO][5585] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.959 [INFO][5585] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.986 [INFO][5591] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.987 [INFO][5591] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.987 [INFO][5591] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.997 [WARNING][5591] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.997 [INFO][5591] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:00.998 [INFO][5591] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.003830 containerd[1782]: 2025-01-17 12:18:01.000 [INFO][5585] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.003830 containerd[1782]: time="2025-01-17T12:18:01.002985003Z" level=info msg="TearDown network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" successfully" Jan 17 12:18:01.003830 containerd[1782]: time="2025-01-17T12:18:01.003016804Z" level=info msg="StopPodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" returns successfully" Jan 17 12:18:01.003830 containerd[1782]: time="2025-01-17T12:18:01.003509815Z" level=info msg="RemovePodSandbox for \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" Jan 17 12:18:01.003830 containerd[1782]: time="2025-01-17T12:18:01.003544815Z" level=info msg="Forcibly stopping sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\"" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.074 [WARNING][5609] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0", GenerateName:"calico-kube-controllers-5c8fcdbb86-", Namespace:"calico-system", SelfLink:"", UID:"10fd51e2-809c-4e12-8a73-1d0eb211a996", ResourceVersion:"805", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5c8fcdbb86", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7", Pod:"calico-kube-controllers-5c8fcdbb86-bxf5p", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.114.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali9ce69d15bec", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.075 [INFO][5609] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.075 [INFO][5609] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" iface="eth0" netns="" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.075 [INFO][5609] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.075 [INFO][5609] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.106 [INFO][5615] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.107 [INFO][5615] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.107 [INFO][5615] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.116 [WARNING][5615] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.116 [INFO][5615] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" HandleID="k8s-pod-network.c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--kube--controllers--5c8fcdbb86--bxf5p-eth0" Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.120 [INFO][5615] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.125097 containerd[1782]: 2025-01-17 12:18:01.123 [INFO][5609] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec" Jan 17 12:18:01.127377 containerd[1782]: time="2025-01-17T12:18:01.125597396Z" level=info msg="TearDown network for sandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" successfully" Jan 17 12:18:01.134883 containerd[1782]: time="2025-01-17T12:18:01.134829298Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:01.135146 containerd[1782]: time="2025-01-17T12:18:01.135127205Z" level=info msg="RemovePodSandbox \"c67f590450ba049492d15a4fa4de0a7cecbd2d3e840e419b62c72277624628ec\" returns successfully" Jan 17 12:18:01.135922 containerd[1782]: time="2025-01-17T12:18:01.135894022Z" level=info msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.244 [WARNING][5634] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7589a2f-6276-44ab-9ea6-20277e8e0375", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6", Pod:"calico-apiserver-6c9d78bb49-bprl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba9fe41f334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.244 [INFO][5634] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.244 [INFO][5634] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" iface="eth0" netns="" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.244 [INFO][5634] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.244 [INFO][5634] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.276 [INFO][5644] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.277 [INFO][5644] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.277 [INFO][5644] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.284 [WARNING][5644] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.284 [INFO][5644] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.286 [INFO][5644] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.289885 containerd[1782]: 2025-01-17 12:18:01.287 [INFO][5634] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.289885 containerd[1782]: time="2025-01-17T12:18:01.289595097Z" level=info msg="TearDown network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" successfully" Jan 17 12:18:01.289885 containerd[1782]: time="2025-01-17T12:18:01.289648098Z" level=info msg="StopPodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" returns successfully" Jan 17 12:18:01.292712 containerd[1782]: time="2025-01-17T12:18:01.292098052Z" level=info msg="RemovePodSandbox for \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" Jan 17 12:18:01.292712 containerd[1782]: time="2025-01-17T12:18:01.292244955Z" level=info msg="Forcibly stopping sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\"" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.349 [WARNING][5663] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"f7589a2f-6276-44ab-9ea6-20277e8e0375", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7c381ed0285f9a3dcc304075385db52bbde84501821d7d06c01a812cd39305e6", Pod:"calico-apiserver-6c9d78bb49-bprl2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliba9fe41f334", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.350 [INFO][5663] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.350 [INFO][5663] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" iface="eth0" netns="" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.350 [INFO][5663] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.350 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.372 [INFO][5669] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.372 [INFO][5669] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.372 [INFO][5669] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.378 [WARNING][5669] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.378 [INFO][5669] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" HandleID="k8s-pod-network.ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--bprl2-eth0" Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.381 [INFO][5669] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.384162 containerd[1782]: 2025-01-17 12:18:01.382 [INFO][5663] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9" Jan 17 12:18:01.384852 containerd[1782]: time="2025-01-17T12:18:01.384219975Z" level=info msg="TearDown network for sandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" successfully" Jan 17 12:18:01.481948 containerd[1782]: time="2025-01-17T12:18:01.481894720Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:01.482136 containerd[1782]: time="2025-01-17T12:18:01.481985422Z" level=info msg="RemovePodSandbox \"ce63a03f819aaf40ac1c083ca7215f204b4c0c7a8d2d30a89f74ea23db4180f9\" returns successfully" Jan 17 12:18:01.483179 containerd[1782]: time="2025-01-17T12:18:01.483008644Z" level=info msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.564 [WARNING][5693] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"041daf5e-35eb-4040-afb1-513c992a1e08", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e", Pod:"coredns-76f75df574-qj8p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959d4c9781c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.564 [INFO][5693] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.564 [INFO][5693] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" iface="eth0" netns="" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.564 [INFO][5693] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.565 [INFO][5693] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.611 [INFO][5700] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.611 [INFO][5700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.611 [INFO][5700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.621 [WARNING][5700] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.621 [INFO][5700] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.623 [INFO][5700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.627303 containerd[1782]: 2025-01-17 12:18:01.625 [INFO][5693] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.628558 containerd[1782]: time="2025-01-17T12:18:01.627843225Z" level=info msg="TearDown network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" successfully" Jan 17 12:18:01.628558 containerd[1782]: time="2025-01-17T12:18:01.627884525Z" level=info msg="StopPodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" returns successfully" Jan 17 12:18:01.628905 containerd[1782]: time="2025-01-17T12:18:01.628831346Z" level=info msg="RemovePodSandbox for \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" Jan 17 12:18:01.628905 containerd[1782]: time="2025-01-17T12:18:01.628876947Z" level=info msg="Forcibly stopping sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\"" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.692 [WARNING][5719] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"041daf5e-35eb-4040-afb1-513c992a1e08", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"2fdbdd468a40ea698405785b5b052440e77383542363b43dbca631f438f0299e", Pod:"coredns-76f75df574-qj8p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali959d4c9781c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.692 [INFO][5719] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.692 [INFO][5719] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" iface="eth0" netns="" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.692 [INFO][5719] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.692 [INFO][5719] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.747 [INFO][5728] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.747 [INFO][5728] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.747 [INFO][5728] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.756 [WARNING][5728] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.756 [INFO][5728] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" HandleID="k8s-pod-network.7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--qj8p7-eth0" Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.760 [INFO][5728] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.763996 containerd[1782]: 2025-01-17 12:18:01.762 [INFO][5719] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf" Jan 17 12:18:01.764886 containerd[1782]: time="2025-01-17T12:18:01.764055816Z" level=info msg="TearDown network for sandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" successfully" Jan 17 12:18:01.775239 containerd[1782]: time="2025-01-17T12:18:01.775174860Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:01.775400 containerd[1782]: time="2025-01-17T12:18:01.775260562Z" level=info msg="RemovePodSandbox \"7fb6d185efbfd1fe3e9b588fc4dd3e3d4dff94aa1fd26c5a31be0717ef02bacf\" returns successfully" Jan 17 12:18:01.776403 containerd[1782]: time="2025-01-17T12:18:01.776020578Z" level=info msg="StopPodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.835 [WARNING][5750] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"24682ad7-0945-4efc-b49b-c11c02b2d640", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14", Pod:"calico-apiserver-6c9d78bb49-9nb88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib526747e8d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.836 [INFO][5750] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.836 [INFO][5750] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" iface="eth0" netns="" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.836 [INFO][5750] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.836 [INFO][5750] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.869 [INFO][5756] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.869 [INFO][5756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.870 [INFO][5756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.881 [WARNING][5756] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.882 [INFO][5756] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.885 [INFO][5756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:01.889949 containerd[1782]: 2025-01-17 12:18:01.887 [INFO][5750] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:01.891164 containerd[1782]: time="2025-01-17T12:18:01.890434391Z" level=info msg="TearDown network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" successfully" Jan 17 12:18:01.891164 containerd[1782]: time="2025-01-17T12:18:01.890588794Z" level=info msg="StopPodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" returns successfully" Jan 17 12:18:01.892626 containerd[1782]: time="2025-01-17T12:18:01.891282210Z" level=info msg="RemovePodSandbox for \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" Jan 17 12:18:01.892626 containerd[1782]: time="2025-01-17T12:18:01.891320010Z" level=info msg="Forcibly stopping sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\"" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.949 [WARNING][5774] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0", GenerateName:"calico-apiserver-6c9d78bb49-", Namespace:"calico-apiserver", SelfLink:"", UID:"24682ad7-0945-4efc-b49b-c11c02b2d640", ResourceVersion:"813", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"6c9d78bb49", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"1d029d9ed36c3e9433f15542239df5dc0cf329ab7c68aad448b5c63969f1cf14", Pod:"calico-apiserver-6c9d78bb49-9nb88", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.114.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib526747e8d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.949 [INFO][5774] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.950 [INFO][5774] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" iface="eth0" netns="" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.950 [INFO][5774] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.950 [INFO][5774] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.987 [INFO][5781] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.988 [INFO][5781] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.988 [INFO][5781] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.996 [WARNING][5781] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:01.997 [INFO][5781] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" HandleID="k8s-pod-network.da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Workload="ci--4081.3.0--a--bcafed7e46-k8s-calico--apiserver--6c9d78bb49--9nb88-eth0" Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:02.000 [INFO][5781] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:02.004436 containerd[1782]: 2025-01-17 12:18:02.002 [INFO][5774] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33" Jan 17 12:18:02.005452 containerd[1782]: time="2025-01-17T12:18:02.004490096Z" level=info msg="TearDown network for sandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" successfully" Jan 17 12:18:02.017702 containerd[1782]: time="2025-01-17T12:18:02.016926669Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:02.017702 containerd[1782]: time="2025-01-17T12:18:02.017027771Z" level=info msg="RemovePodSandbox \"da8520370d0f98bcfa6fa8aeb05560ecf70d2824553bdf1a2f69228e8352db33\" returns successfully" Jan 17 12:18:02.018861 containerd[1782]: time="2025-01-17T12:18:02.018703008Z" level=info msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.149 [WARNING][5799] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3abf411-90ba-45ad-b3b8-494831f9b2d4", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105", Pod:"csi-node-driver-8wmfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8473fcdae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.150 [INFO][5799] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.150 [INFO][5799] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" iface="eth0" netns="" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.150 [INFO][5799] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.150 [INFO][5799] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.198 [INFO][5806] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.198 [INFO][5806] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.198 [INFO][5806] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.209 [WARNING][5806] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.209 [INFO][5806] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.215 [INFO][5806] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:02.219881 containerd[1782]: 2025-01-17 12:18:02.217 [INFO][5799] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.222770 containerd[1782]: time="2025-01-17T12:18:02.219715022Z" level=info msg="TearDown network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" successfully" Jan 17 12:18:02.222770 containerd[1782]: time="2025-01-17T12:18:02.219973027Z" level=info msg="StopPodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" returns successfully" Jan 17 12:18:02.222770 containerd[1782]: time="2025-01-17T12:18:02.221380758Z" level=info msg="RemovePodSandbox for \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" Jan 17 12:18:02.222770 containerd[1782]: time="2025-01-17T12:18:02.221429759Z" level=info msg="Forcibly stopping sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\"" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.288 [WARNING][5824] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a3abf411-90ba-45ad-b3b8-494831f9b2d4", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105", Pod:"csi-node-driver-8wmfr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.114.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib8473fcdae9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.288 [INFO][5824] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.289 [INFO][5824] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" iface="eth0" netns="" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.289 [INFO][5824] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.289 [INFO][5824] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.324 [INFO][5830] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.324 [INFO][5830] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.324 [INFO][5830] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.335 [WARNING][5830] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.335 [INFO][5830] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" HandleID="k8s-pod-network.74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Workload="ci--4081.3.0--a--bcafed7e46-k8s-csi--node--driver--8wmfr-eth0" Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.336 [INFO][5830] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:02.340138 containerd[1782]: 2025-01-17 12:18:02.338 [INFO][5824] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8" Jan 17 12:18:02.341808 containerd[1782]: time="2025-01-17T12:18:02.340956184Z" level=info msg="TearDown network for sandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" successfully" Jan 17 12:18:02.351897 containerd[1782]: time="2025-01-17T12:18:02.351710620Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:02.352275 containerd[1782]: time="2025-01-17T12:18:02.352248732Z" level=info msg="RemovePodSandbox \"74641161a99a75a3f146c691ab6336bef7e7c116136448ce63b9ffd2049983a8\" returns successfully" Jan 17 12:18:02.354672 containerd[1782]: time="2025-01-17T12:18:02.354184175Z" level=info msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.417 [WARNING][5848] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d001a922-c53b-4b28-b857-cb021efc482d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c", Pod:"coredns-76f75df574-5w5bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43451ecfa20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.418 [INFO][5848] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.418 [INFO][5848] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" iface="eth0" netns="" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.418 [INFO][5848] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.418 [INFO][5848] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.458 [INFO][5856] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.458 [INFO][5856] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.458 [INFO][5856] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.466 [WARNING][5856] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.466 [INFO][5856] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.467 [INFO][5856] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:02.471777 containerd[1782]: 2025-01-17 12:18:02.469 [INFO][5848] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.471777 containerd[1782]: time="2025-01-17T12:18:02.471590153Z" level=info msg="TearDown network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" successfully" Jan 17 12:18:02.471777 containerd[1782]: time="2025-01-17T12:18:02.471624854Z" level=info msg="StopPodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" returns successfully" Jan 17 12:18:02.473197 containerd[1782]: time="2025-01-17T12:18:02.473165687Z" level=info msg="RemovePodSandbox for \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" Jan 17 12:18:02.473348 containerd[1782]: time="2025-01-17T12:18:02.473202788Z" level=info msg="Forcibly stopping sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\"" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.529 [WARNING][5874] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"d001a922-c53b-4b28-b857-cb021efc482d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 17, 12, 17, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-bcafed7e46", ContainerID:"306024462f3eb8e92506f005ceb01437a6bdacd974234e67c0659f1da8c5003c", Pod:"coredns-76f75df574-5w5bs", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.114.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali43451ecfa20", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.529 [INFO][5874] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.529 [INFO][5874] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" iface="eth0" netns="" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.529 [INFO][5874] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.529 [INFO][5874] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.566 [INFO][5880] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.567 [INFO][5880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.567 [INFO][5880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.577 [WARNING][5880] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.577 [INFO][5880] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" HandleID="k8s-pod-network.03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Workload="ci--4081.3.0--a--bcafed7e46-k8s-coredns--76f75df574--5w5bs-eth0" Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.579 [INFO][5880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 17 12:18:02.584752 containerd[1782]: 2025-01-17 12:18:02.582 [INFO][5874] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2" Jan 17 12:18:02.585878 containerd[1782]: time="2025-01-17T12:18:02.585530055Z" level=info msg="TearDown network for sandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" successfully" Jan 17 12:18:02.594890 containerd[1782]: time="2025-01-17T12:18:02.594833959Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 12:18:02.595007 containerd[1782]: time="2025-01-17T12:18:02.594928861Z" level=info msg="RemovePodSandbox \"03f6b827f3130ad915c6aa60570882a04d7d3bee80976ff9f5e9e5aeadfe68f2\" returns successfully" Jan 17 12:18:02.649013 containerd[1782]: time="2025-01-17T12:18:02.648953648Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:02.651239 containerd[1782]: time="2025-01-17T12:18:02.651175796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 17 12:18:02.655551 containerd[1782]: time="2025-01-17T12:18:02.655473191Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:02.660446 containerd[1782]: time="2025-01-17T12:18:02.660386299Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:02.661613 containerd[1782]: time="2025-01-17T12:18:02.661119715Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.919570572s" Jan 17 12:18:02.661613 containerd[1782]: time="2025-01-17T12:18:02.661162116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference 
\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 17 12:18:02.663400 containerd[1782]: time="2025-01-17T12:18:02.663016256Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 17 12:18:02.680263 containerd[1782]: time="2025-01-17T12:18:02.680178733Z" level=info msg="CreateContainer within sandbox \"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 17 12:18:02.723382 containerd[1782]: time="2025-01-17T12:18:02.723257079Z" level=info msg="CreateContainer within sandbox \"2fdce2b4c275173124e3ff7648bd3c7453be835d87699daf2d9398d1522816b7\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"1aa50a82a1cfc11b5387b3793e5d0c367d4f8c4ef4cc237f79b4c80425e37e07\"" Jan 17 12:18:02.724505 containerd[1782]: time="2025-01-17T12:18:02.724397704Z" level=info msg="StartContainer for \"1aa50a82a1cfc11b5387b3793e5d0c367d4f8c4ef4cc237f79b4c80425e37e07\"" Jan 17 12:18:02.831554 containerd[1782]: time="2025-01-17T12:18:02.831496556Z" level=info msg="StartContainer for \"1aa50a82a1cfc11b5387b3793e5d0c367d4f8c4ef4cc237f79b4c80425e37e07\" returns successfully" Jan 17 12:18:03.269062 kubelet[3449]: I0117 12:18:03.267106 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5c8fcdbb86-bxf5p" podStartSLOduration=30.341838825 podStartE2EDuration="34.26703142s" podCreationTimestamp="2025-01-17 12:17:29 +0000 UTC" firstStartedPulling="2025-01-17 12:17:58.73641323 +0000 UTC m=+57.947015168" lastFinishedPulling="2025-01-17 12:18:02.661605925 +0000 UTC m=+61.872207763" observedRunningTime="2025-01-17 12:18:03.263469442 +0000 UTC m=+62.474071280" watchObservedRunningTime="2025-01-17 12:18:03.26703142 +0000 UTC m=+62.477633358" Jan 17 12:18:03.983232 containerd[1782]: time="2025-01-17T12:18:03.983175646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:03.985192 containerd[1782]: time="2025-01-17T12:18:03.985103989Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 17 12:18:03.989415 containerd[1782]: time="2025-01-17T12:18:03.989381783Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:03.993811 containerd[1782]: time="2025-01-17T12:18:03.993739878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:03.994536 containerd[1782]: time="2025-01-17T12:18:03.994381492Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.331324735s" Jan 17 12:18:03.994536 containerd[1782]: time="2025-01-17T12:18:03.994420893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 17 12:18:03.997369 containerd[1782]: 
time="2025-01-17T12:18:03.997121553Z" level=info msg="CreateContainer within sandbox \"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 17 12:18:04.031081 containerd[1782]: time="2025-01-17T12:18:04.031033497Z" level=info msg="CreateContainer within sandbox \"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"697ca34f540a2452201a1990b685ecf21e2146078643d9372ed3f462eadff4c2\"" Jan 17 12:18:04.032457 containerd[1782]: time="2025-01-17T12:18:04.031701012Z" level=info msg="StartContainer for \"697ca34f540a2452201a1990b685ecf21e2146078643d9372ed3f462eadff4c2\"" Jan 17 12:18:04.080071 systemd[1]: run-containerd-runc-k8s.io-697ca34f540a2452201a1990b685ecf21e2146078643d9372ed3f462eadff4c2-runc.ul0FTk.mount: Deactivated successfully. Jan 17 12:18:04.115246 containerd[1782]: time="2025-01-17T12:18:04.115205746Z" level=info msg="StartContainer for \"697ca34f540a2452201a1990b685ecf21e2146078643d9372ed3f462eadff4c2\" returns successfully" Jan 17 12:18:04.118089 containerd[1782]: time="2025-01-17T12:18:04.118051808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 17 12:18:05.662138 containerd[1782]: time="2025-01-17T12:18:05.662068213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:05.664299 containerd[1782]: time="2025-01-17T12:18:05.664219461Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 17 12:18:05.668911 containerd[1782]: time="2025-01-17T12:18:05.668858862Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:05.673950 containerd[1782]: time="2025-01-17T12:18:05.673872072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 12:18:05.674824 containerd[1782]: time="2025-01-17T12:18:05.674641489Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.556342476s" Jan 17 12:18:05.674824 containerd[1782]: time="2025-01-17T12:18:05.674695290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 17 12:18:05.677112 containerd[1782]: time="2025-01-17T12:18:05.677061542Z" level=info msg="CreateContainer within sandbox \"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 17 12:18:05.726311 containerd[1782]: time="2025-01-17T12:18:05.726248722Z" level=info msg="CreateContainer within sandbox \"7661ced99ed143fadacce34f9149aa0fffc41810124b07d6f0abd47916ca1105\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"44d29d2ced7b93e786708a53e3b857b8069adb9e1f553d0e13ce1c1e6566940a\"" Jan 17 12:18:05.727070 containerd[1782]: time="2025-01-17T12:18:05.727030739Z" level=info msg="StartContainer for \"44d29d2ced7b93e786708a53e3b857b8069adb9e1f553d0e13ce1c1e6566940a\"" Jan 17 12:18:05.778047 systemd[1]: run-containerd-runc-k8s.io-44d29d2ced7b93e786708a53e3b857b8069adb9e1f553d0e13ce1c1e6566940a-runc.TkzMw4.mount: Deactivated successfully. Jan 17 12:18:05.811978 containerd[1782]: time="2025-01-17T12:18:05.811795998Z" level=info msg="StartContainer for \"44d29d2ced7b93e786708a53e3b857b8069adb9e1f553d0e13ce1c1e6566940a\" returns successfully" Jan 17 12:18:06.023991 kubelet[3449]: I0117 12:18:06.023798 3449 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 17 12:18:06.023991 kubelet[3449]: I0117 12:18:06.023870 3449 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 17 12:18:06.282285 kubelet[3449]: I0117 12:18:06.280868 3449 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-8wmfr" podStartSLOduration=31.931768167 podStartE2EDuration="38.279126351s" podCreationTimestamp="2025-01-17 12:17:28 +0000 UTC" firstStartedPulling="2025-01-17 12:17:59.327839618 +0000 UTC m=+58.538441556" lastFinishedPulling="2025-01-17 12:18:05.675197902 +0000 UTC m=+64.885799740" observedRunningTime="2025-01-17 12:18:06.27859914 +0000 UTC m=+65.489200978" watchObservedRunningTime="2025-01-17 12:18:06.279126351 +0000 UTC m=+65.489728189" Jan 17 12:18:08.977286 systemd[1]: run-containerd-runc-k8s.io-0311b95879c2a55cfad190fe7892af301e8b0bfc5018560ed97dc8b8ce95c639-runc.he6sY9.mount: Deactivated successfully. Jan 17 12:18:33.648092 kubelet[3449]: I0117 12:18:33.647713 3449 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 17 12:18:59.510234 systemd[1]: Started sshd@7-10.200.8.43:22-10.200.16.10:39652.service - OpenSSH per-connection server daemon (10.200.16.10:39652). Jan 17 12:19:00.167734 sshd[6142]: Accepted publickey for core from 10.200.16.10 port 39652 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:00.169063 sshd[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:00.176260 systemd-logind[1758]: New session 10 of user core. Jan 17 12:19:00.183013 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 12:19:00.717461 sshd[6142]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:00.723189 systemd[1]: sshd@7-10.200.8.43:22-10.200.16.10:39652.service: Deactivated successfully. Jan 17 12:19:00.727923 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 12:19:00.728940 systemd-logind[1758]: Session 10 logged out. Waiting for processes to exit. Jan 17 12:19:00.730057 systemd-logind[1758]: Removed session 10. Jan 17 12:19:05.830107 systemd[1]: Started sshd@8-10.200.8.43:22-10.200.16.10:39666.service - OpenSSH per-connection server daemon (10.200.16.10:39666). 
Jan 17 12:19:06.476392 sshd[6159]: Accepted publickey for core from 10.200.16.10 port 39666 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:06.478043 sshd[6159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:06.482343 systemd-logind[1758]: New session 11 of user core. Jan 17 12:19:06.487229 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 12:19:07.004330 sshd[6159]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:07.008053 systemd[1]: sshd@8-10.200.8.43:22-10.200.16.10:39666.service: Deactivated successfully. Jan 17 12:19:07.015039 systemd-logind[1758]: Session 11 logged out. Waiting for processes to exit. Jan 17 12:19:07.015295 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 12:19:07.016832 systemd-logind[1758]: Removed session 11. Jan 17 12:19:12.117225 systemd[1]: Started sshd@9-10.200.8.43:22-10.200.16.10:45220.service - OpenSSH per-connection server daemon (10.200.16.10:45220). Jan 17 12:19:12.761433 sshd[6220]: Accepted publickey for core from 10.200.16.10 port 45220 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:12.763083 sshd[6220]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:12.767384 systemd-logind[1758]: New session 12 of user core. Jan 17 12:19:12.772213 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 12:19:13.282997 sshd[6220]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:13.288054 systemd[1]: sshd@9-10.200.8.43:22-10.200.16.10:45220.service: Deactivated successfully. Jan 17 12:19:13.293399 systemd-logind[1758]: Session 12 logged out. Waiting for processes to exit. Jan 17 12:19:13.294047 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 12:19:13.295430 systemd-logind[1758]: Removed session 12. Jan 17 12:19:13.401371 systemd[1]: Started sshd@10-10.200.8.43:22-10.200.16.10:45224.service - OpenSSH per-connection server daemon (10.200.16.10:45224). Jan 17 12:19:14.061403 sshd[6235]: Accepted publickey for core from 10.200.16.10 port 45224 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:14.063024 sshd[6235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:14.067316 systemd-logind[1758]: New session 13 of user core. Jan 17 12:19:14.075073 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 17 12:19:14.625086 sshd[6235]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:14.629889 systemd[1]: sshd@10-10.200.8.43:22-10.200.16.10:45224.service: Deactivated successfully. Jan 17 12:19:14.635116 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 12:19:14.636003 systemd-logind[1758]: Session 13 logged out. Waiting for processes to exit. Jan 17 12:19:14.637025 systemd-logind[1758]: Removed session 13. Jan 17 12:19:14.735598 systemd[1]: Started sshd@11-10.200.8.43:22-10.200.16.10:45232.service - OpenSSH per-connection server daemon (10.200.16.10:45232). Jan 17 12:19:15.379860 sshd[6247]: Accepted publickey for core from 10.200.16.10 port 45232 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:15.381658 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:15.386719 systemd-logind[1758]: New session 14 of user core. Jan 17 12:19:15.392057 systemd[1]: Started session-14.scope - Session 14 of User core. 
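The sshd records above follow a fixed lifecycle per connection: "Accepted publickey" for a PID, pam_unix opening the session, systemd-logind allocating a numbered session, then the same PID logging "session closed". Since the PID ties the open and close together, session durations fall straight out of the log; a hypothetical pairing helper (field layouts follow the entries above, everything else is assumption):

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
	"time"
)

var (
	accepted = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: Accepted publickey`)
	closed   = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) sshd\[(\d+)\]: pam_unix\(sshd:session\): session closed`)
)

func main() {
	const layout = "Jan 2 15:04:05.000000" // journal short timestamp, no year
	open := map[string]time.Time{}         // sshd PID -> accept time
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		line := sc.Text()
		if m := accepted.FindStringSubmatch(line); m != nil {
			if t, err := time.Parse(layout, m[1]); err == nil {
				open[m[2]] = t
			}
		} else if m := closed.FindStringSubmatch(line); m != nil {
			if t0, ok := open[m[2]]; ok {
				if t1, err := time.Parse(layout, m[1]); err == nil {
					fmt.Printf("sshd[%s]: session lasted %s\n", m[2], t1.Sub(t0))
				}
				delete(open, m[2])
			}
		}
	}
}

Fed the lines above, this would show session 10 (sshd[6142]) lasting roughly 548ms from accept to close, with the other sessions in the same sub-second to few-second range.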
Jan 17 12:19:15.897655 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:15.900940 systemd[1]: sshd@11-10.200.8.43:22-10.200.16.10:45232.service: Deactivated successfully. Jan 17 12:19:15.906612 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 12:19:15.907713 systemd-logind[1758]: Session 14 logged out. Waiting for processes to exit. Jan 17 12:19:15.908654 systemd-logind[1758]: Removed session 14. Jan 17 12:19:21.010058 systemd[1]: Started sshd@12-10.200.8.43:22-10.200.16.10:39174.service - OpenSSH per-connection server daemon (10.200.16.10:39174). Jan 17 12:19:21.658599 sshd[6269]: Accepted publickey for core from 10.200.16.10 port 39174 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:21.660268 sshd[6269]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:21.665052 systemd-logind[1758]: New session 15 of user core. Jan 17 12:19:21.670211 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 12:19:22.181388 sshd[6269]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:22.185297 systemd[1]: sshd@12-10.200.8.43:22-10.200.16.10:39174.service: Deactivated successfully. Jan 17 12:19:22.192446 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 12:19:22.193418 systemd-logind[1758]: Session 15 logged out. Waiting for processes to exit. Jan 17 12:19:22.194463 systemd-logind[1758]: Removed session 15. Jan 17 12:19:27.298110 systemd[1]: Started sshd@13-10.200.8.43:22-10.200.16.10:46106.service - OpenSSH per-connection server daemon (10.200.16.10:46106). Jan 17 12:19:27.940789 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 46106 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:27.943398 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:27.951969 systemd-logind[1758]: New session 16 of user core. Jan 17 12:19:27.956066 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 12:19:28.474996 sshd[6288]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:28.479907 systemd[1]: sshd@13-10.200.8.43:22-10.200.16.10:46106.service: Deactivated successfully. Jan 17 12:19:28.484649 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 12:19:28.485480 systemd-logind[1758]: Session 16 logged out. Waiting for processes to exit. Jan 17 12:19:28.486578 systemd-logind[1758]: Removed session 16. Jan 17 12:19:33.593435 systemd[1]: Started sshd@14-10.200.8.43:22-10.200.16.10:46114.service - OpenSSH per-connection server daemon (10.200.16.10:46114). Jan 17 12:19:34.259257 sshd[6314]: Accepted publickey for core from 10.200.16.10 port 46114 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:34.261081 sshd[6314]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:34.265392 systemd-logind[1758]: New session 17 of user core. Jan 17 12:19:34.275173 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 12:19:34.791816 sshd[6314]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:34.796198 systemd[1]: sshd@14-10.200.8.43:22-10.200.16.10:46114.service: Deactivated successfully. Jan 17 12:19:34.800960 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 12:19:34.801716 systemd-logind[1758]: Session 17 logged out. Waiting for processes to exit. Jan 17 12:19:34.802664 systemd-logind[1758]: Removed session 17. 
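Each connection also gets its own transient systemd unit, and the unit name itself encodes the endpoints: sshd@<instance>-<localIP>:<localPort>-<peerIP>:<peerPort>.service. A small sketch pulling the endpoints back out of a unit name taken from the log (the regexp covers only the IPv4 form seen here):

package main

import (
	"fmt"
	"regexp"
)

var unitRE = regexp.MustCompile(`^sshd@\d+-([\d.]+):(\d+)-([\d.]+):(\d+)\.service$`)

func main() {
	m := unitRE.FindStringSubmatch("sshd@13-10.200.8.43:22-10.200.16.10:46106.service")
	if m != nil {
		// local 10.200.8.43:22 <- peer 10.200.16.10:46106
		fmt.Printf("local %s:%s <- peer %s:%s\n", m[1], m[2], m[3], m[4])
	}
}

Per-connection units are why every teardown above is a clean "sshd@N-...service: Deactivated successfully" rather than a restart of a shared daemon.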
Jan 17 12:19:39.900453 systemd[1]: Started sshd@15-10.200.8.43:22-10.200.16.10:54806.service - OpenSSH per-connection server daemon (10.200.16.10:54806). Jan 17 12:19:40.549316 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 54806 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:40.551246 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:40.556285 systemd-logind[1758]: New session 18 of user core. Jan 17 12:19:40.560036 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 12:19:41.071964 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:41.077627 systemd[1]: sshd@15-10.200.8.43:22-10.200.16.10:54806.service: Deactivated successfully. Jan 17 12:19:41.081827 systemd-logind[1758]: Session 18 logged out. Waiting for processes to exit. Jan 17 12:19:41.082428 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 12:19:41.083732 systemd-logind[1758]: Removed session 18. Jan 17 12:19:41.187650 systemd[1]: Started sshd@16-10.200.8.43:22-10.200.16.10:54816.service - OpenSSH per-connection server daemon (10.200.16.10:54816). Jan 17 12:19:41.444385 systemd[1]: run-containerd-runc-k8s.io-1aa50a82a1cfc11b5387b3793e5d0c367d4f8c4ef4cc237f79b4c80425e37e07-runc.monhar.mount: Deactivated successfully. Jan 17 12:19:41.856247 sshd[6364]: Accepted publickey for core from 10.200.16.10 port 54816 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:41.857631 sshd[6364]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:41.861995 systemd-logind[1758]: New session 19 of user core. Jan 17 12:19:41.866031 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 12:19:42.705629 sshd[6364]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:42.709743 systemd[1]: sshd@16-10.200.8.43:22-10.200.16.10:54816.service: Deactivated successfully. Jan 17 12:19:42.716408 systemd-logind[1758]: Session 19 logged out. Waiting for processes to exit. Jan 17 12:19:42.717129 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 12:19:42.718931 systemd-logind[1758]: Removed session 19. Jan 17 12:19:42.818036 systemd[1]: Started sshd@17-10.200.8.43:22-10.200.16.10:54832.service - OpenSSH per-connection server daemon (10.200.16.10:54832). Jan 17 12:19:43.463435 sshd[6392]: Accepted publickey for core from 10.200.16.10 port 54832 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:43.465337 sshd[6392]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:43.471063 systemd-logind[1758]: New session 20 of user core. Jan 17 12:19:43.475084 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 12:19:45.828957 sshd[6392]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:45.832389 systemd[1]: sshd@17-10.200.8.43:22-10.200.16.10:54832.service: Deactivated successfully. Jan 17 12:19:45.838317 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 12:19:45.839214 systemd-logind[1758]: Session 20 logged out. Waiting for processes to exit. Jan 17 12:19:45.840274 systemd-logind[1758]: Removed session 20. Jan 17 12:19:45.943097 systemd[1]: Started sshd@18-10.200.8.43:22-10.200.16.10:33736.service - OpenSSH per-connection server daemon (10.200.16.10:33736). 
Jan 17 12:19:46.588803 sshd[6429]: Accepted publickey for core from 10.200.16.10 port 33736 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:46.590376 sshd[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:46.594815 systemd-logind[1758]: New session 21 of user core. Jan 17 12:19:46.602039 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 12:19:47.210828 sshd[6429]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:47.216753 systemd[1]: sshd@18-10.200.8.43:22-10.200.16.10:33736.service: Deactivated successfully. Jan 17 12:19:47.221226 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 12:19:47.222224 systemd-logind[1758]: Session 21 logged out. Waiting for processes to exit. Jan 17 12:19:47.223348 systemd-logind[1758]: Removed session 21. Jan 17 12:19:47.327298 systemd[1]: Started sshd@19-10.200.8.43:22-10.200.16.10:33740.service - OpenSSH per-connection server daemon (10.200.16.10:33740). Jan 17 12:19:47.991821 sshd[6441]: Accepted publickey for core from 10.200.16.10 port 33740 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:47.995203 sshd[6441]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:48.005018 systemd-logind[1758]: New session 22 of user core. Jan 17 12:19:48.010830 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 12:19:48.524183 sshd[6441]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:48.528649 systemd[1]: sshd@19-10.200.8.43:22-10.200.16.10:33740.service: Deactivated successfully. Jan 17 12:19:48.532880 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 12:19:48.533900 systemd-logind[1758]: Session 22 logged out. Waiting for processes to exit. Jan 17 12:19:48.534943 systemd-logind[1758]: Removed session 22. Jan 17 12:19:53.634340 systemd[1]: Started sshd@20-10.200.8.43:22-10.200.16.10:33754.service - OpenSSH per-connection server daemon (10.200.16.10:33754). Jan 17 12:19:54.278888 sshd[6457]: Accepted publickey for core from 10.200.16.10 port 33754 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:19:54.280675 sshd[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:19:54.285002 systemd-logind[1758]: New session 23 of user core. Jan 17 12:19:54.290415 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 17 12:19:54.798457 sshd[6457]: pam_unix(sshd:session): session closed for user core Jan 17 12:19:54.802015 systemd[1]: sshd@20-10.200.8.43:22-10.200.16.10:33754.service: Deactivated successfully. Jan 17 12:19:54.808861 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 12:19:54.810130 systemd-logind[1758]: Session 23 logged out. Waiting for processes to exit. Jan 17 12:19:54.811793 systemd-logind[1758]: Removed session 23. Jan 17 12:19:59.911343 systemd[1]: Started sshd@21-10.200.8.43:22-10.200.16.10:55220.service - OpenSSH per-connection server daemon (10.200.16.10:55220). Jan 17 12:20:00.563780 sshd[6474]: Accepted publickey for core from 10.200.16.10 port 55220 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:20:00.565804 sshd[6474]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:00.571231 systemd-logind[1758]: New session 24 of user core. Jan 17 12:20:00.577057 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 17 12:20:01.082209 sshd[6474]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:01.085690 systemd[1]: sshd@21-10.200.8.43:22-10.200.16.10:55220.service: Deactivated successfully. Jan 17 12:20:01.091091 systemd-logind[1758]: Session 24 logged out. Waiting for processes to exit. Jan 17 12:20:01.092341 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 12:20:01.093511 systemd-logind[1758]: Removed session 24. Jan 17 12:20:06.194526 systemd[1]: Started sshd@22-10.200.8.43:22-10.200.16.10:56118.service - OpenSSH per-connection server daemon (10.200.16.10:56118). Jan 17 12:20:06.847307 sshd[6490]: Accepted publickey for core from 10.200.16.10 port 56118 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:20:06.849430 sshd[6490]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:06.854008 systemd-logind[1758]: New session 25 of user core. Jan 17 12:20:06.859021 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 17 12:20:07.366636 sshd[6490]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:07.371295 systemd[1]: sshd@22-10.200.8.43:22-10.200.16.10:56118.service: Deactivated successfully. Jan 17 12:20:07.376277 systemd[1]: session-25.scope: Deactivated successfully. Jan 17 12:20:07.377158 systemd-logind[1758]: Session 25 logged out. Waiting for processes to exit. Jan 17 12:20:07.378208 systemd-logind[1758]: Removed session 25. Jan 17 12:20:11.443786 systemd[1]: run-containerd-runc-k8s.io-1aa50a82a1cfc11b5387b3793e5d0c367d4f8c4ef4cc237f79b4c80425e37e07-runc.STooEu.mount: Deactivated successfully. Jan 17 12:20:12.480114 systemd[1]: Started sshd@23-10.200.8.43:22-10.200.16.10:56134.service - OpenSSH per-connection server daemon (10.200.16.10:56134). Jan 17 12:20:13.124902 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 56134 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:20:13.126671 sshd[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:13.137254 systemd-logind[1758]: New session 26 of user core. Jan 17 12:20:13.142421 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 17 12:20:13.643158 sshd[6545]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:13.647432 systemd[1]: sshd@23-10.200.8.43:22-10.200.16.10:56134.service: Deactivated successfully. Jan 17 12:20:13.652651 systemd-logind[1758]: Session 26 logged out. Waiting for processes to exit. Jan 17 12:20:13.653682 systemd[1]: session-26.scope: Deactivated successfully. Jan 17 12:20:13.655958 systemd-logind[1758]: Removed session 26. Jan 17 12:20:18.756301 systemd[1]: Started sshd@24-10.200.8.43:22-10.200.16.10:50502.service - OpenSSH per-connection server daemon (10.200.16.10:50502). Jan 17 12:20:19.404305 sshd[6563]: Accepted publickey for core from 10.200.16.10 port 50502 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:20:19.406338 sshd[6563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:19.411131 systemd-logind[1758]: New session 27 of user core. Jan 17 12:20:19.417048 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 17 12:20:19.939090 sshd[6563]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:19.942286 systemd[1]: sshd@24-10.200.8.43:22-10.200.16.10:50502.service: Deactivated successfully. Jan 17 12:20:19.947846 systemd[1]: session-27.scope: Deactivated successfully. 
Jan 17 12:20:19.949110 systemd-logind[1758]: Session 27 logged out. Waiting for processes to exit. Jan 17 12:20:19.950253 systemd-logind[1758]: Removed session 27. Jan 17 12:20:25.056179 systemd[1]: Started sshd@25-10.200.8.43:22-10.200.16.10:50506.service - OpenSSH per-connection server daemon (10.200.16.10:50506). Jan 17 12:20:25.715360 sshd[6577]: Accepted publickey for core from 10.200.16.10 port 50506 ssh2: RSA SHA256:jFiVq2dNDRUAC8ROX0TLnIcZ39MxKwqEp5xJEl6fen8 Jan 17 12:20:25.717355 sshd[6577]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 12:20:25.722778 systemd-logind[1758]: New session 28 of user core. Jan 17 12:20:25.729140 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 17 12:20:26.245811 sshd[6577]: pam_unix(sshd:session): session closed for user core Jan 17 12:20:26.250348 systemd[1]: sshd@25-10.200.8.43:22-10.200.16.10:50506.service: Deactivated successfully. Jan 17 12:20:26.254943 systemd[1]: session-28.scope: Deactivated successfully. Jan 17 12:20:26.255917 systemd-logind[1758]: Session 28 logged out. Waiting for processes to exit. Jan 17 12:20:26.257051 systemd-logind[1758]: Removed session 28.