Jan 29 12:01:32.134596 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:01:32.134636 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.134651 kernel: BIOS-provided physical RAM map: Jan 29 12:01:32.134662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 12:01:32.134673 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 29 12:01:32.134684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 29 12:01:32.134697 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jan 29 12:01:32.134711 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jan 29 12:01:32.134722 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 29 12:01:32.134733 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 29 12:01:32.134745 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 29 12:01:32.134755 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 29 12:01:32.134766 kernel: printk: bootconsole [earlyser0] enabled Jan 29 12:01:32.134777 kernel: NX (Execute Disable) protection: active Jan 29 12:01:32.134794 kernel: APIC: Static calls initialized Jan 29 12:01:32.134806 kernel: efi: EFI v2.7 by Microsoft Jan 29 12:01:32.134819 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Jan 29 12:01:32.134830 kernel: SMBIOS 3.1.0 present. 
Jan 29 12:01:32.134842 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 29 12:01:32.134854 kernel: Hypervisor detected: Microsoft Hyper-V Jan 29 12:01:32.134867 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 29 12:01:32.134879 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 29 12:01:32.134890 kernel: Hyper-V: Nested features: 0x1e0101 Jan 29 12:01:32.134903 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 29 12:01:32.134917 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 29 12:01:32.134929 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 29 12:01:32.134941 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 29 12:01:32.134954 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 29 12:01:32.134966 kernel: tsc: Detected 2593.908 MHz processor Jan 29 12:01:32.134979 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:01:32.134991 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:01:32.135003 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 29 12:01:32.135015 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 29 12:01:32.135031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:01:32.135042 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 29 12:01:32.135055 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 29 12:01:32.135067 kernel: Using GB pages for direct mapping Jan 29 12:01:32.135079 kernel: Secure boot disabled Jan 29 12:01:32.135091 kernel: ACPI: Early table checksum verification disabled Jan 29 12:01:32.135103 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 29 12:01:32.135121 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135137 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135150 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 29 12:01:32.135163 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 29 12:01:32.135176 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135200 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135213 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135230 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135243 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135257 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135270 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135283 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 29 12:01:32.135296 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 29 12:01:32.135309 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 29 12:01:32.135322 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 29 12:01:32.135338 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 29 12:01:32.135351 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 29 12:01:32.135363 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 29 12:01:32.135376 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 29 12:01:32.135389 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 29 12:01:32.135402 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 29 12:01:32.135415 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 12:01:32.135428 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 12:01:32.135441 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 29 12:01:32.135456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 29 12:01:32.135468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 29 12:01:32.135489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 29 12:01:32.135501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 29 12:01:32.135515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 29 12:01:32.135529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 29 12:01:32.135545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 29 12:01:32.135558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 29 12:01:32.135572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 29 12:01:32.135589 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 29 12:01:32.135603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 29 12:01:32.135617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 29 12:01:32.135631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 29 12:01:32.135645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 29 12:01:32.135659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 29 12:01:32.135673 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 29 12:01:32.135687 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 29 12:01:32.135701 kernel: Zone ranges: Jan 29 12:01:32.135717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:01:32.135731 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:01:32.135745 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 29 12:01:32.135759 kernel: Movable zone start for each node Jan 29 12:01:32.135773 kernel: Early memory node ranges Jan 29 12:01:32.135787 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 29 12:01:32.135801 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 29 12:01:32.135814 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 29 12:01:32.135828 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 29 12:01:32.135844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 29 12:01:32.135858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:01:32.135872 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 29 12:01:32.135886 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 29 12:01:32.135900 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 29 12:01:32.135914 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 29 12:01:32.135927 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:01:32.135941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:01:32.135955 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:01:32.135972 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 29 12:01:32.135986 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:01:32.136000 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 29 12:01:32.136014 kernel: Booting paravirtualized kernel on Hyper-V Jan 29 12:01:32.136028 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:01:32.136042 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:01:32.136056 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:01:32.136070 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:01:32.136083 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:01:32.136099 kernel: Hyper-V: PV spinlocks enabled Jan 29 12:01:32.136113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:01:32.136128 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.136143 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:01:32.136156 kernel: random: crng init done Jan 29 12:01:32.136170 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 12:01:32.136184 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:01:32.138240 kernel: Fallback order for Node 0: 0 Jan 29 12:01:32.138263 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 29 12:01:32.138287 kernel: Policy zone: Normal Jan 29 12:01:32.138305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:01:32.138320 kernel: software IO TLB: area num 2. Jan 29 12:01:32.138335 kernel: Memory: 8069612K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 317588K reserved, 0K cma-reserved) Jan 29 12:01:32.138350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:01:32.138365 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:01:32.138380 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:01:32.138395 kernel: Dynamic Preempt: voluntary Jan 29 12:01:32.138409 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:01:32.138425 kernel: rcu: RCU event tracing is enabled. Jan 29 12:01:32.138444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:01:32.138459 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:01:32.138474 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:01:32.138489 kernel: Tracing variant of Tasks RCU enabled. 
Jan 29 12:01:32.138504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:01:32.138522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:01:32.138537 kernel: Using NULL legacy PIC Jan 29 12:01:32.138552 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 29 12:01:32.138566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:01:32.138582 kernel: Console: colour dummy device 80x25 Jan 29 12:01:32.138597 kernel: printk: console [tty1] enabled Jan 29 12:01:32.138611 kernel: printk: console [ttyS0] enabled Jan 29 12:01:32.138625 kernel: printk: bootconsole [earlyser0] disabled Jan 29 12:01:32.138640 kernel: ACPI: Core revision 20230628 Jan 29 12:01:32.138655 kernel: Failed to register legacy timer interrupt Jan 29 12:01:32.138673 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:01:32.138687 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 29 12:01:32.138702 kernel: Hyper-V: Using IPI hypercalls Jan 29 12:01:32.138716 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 29 12:01:32.138731 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 29 12:01:32.138747 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 29 12:01:32.138762 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 29 12:01:32.138777 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 29 12:01:32.138792 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 29 12:01:32.138810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Jan 29 12:01:32.138825 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 12:01:32.138840 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 12:01:32.138855 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:01:32.138870 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:01:32.138885 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:01:32.138900 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:01:32.138915 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 29 12:01:32.138930 kernel: RETBleed: Vulnerable Jan 29 12:01:32.138947 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:01:32.138961 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:01:32.138976 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:01:32.138991 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 12:01:32.139006 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:01:32.139020 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:01:32.139034 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:01:32.139049 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 12:01:32.139064 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 12:01:32.139079 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 12:01:32.139094 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:01:32.139111 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 29 12:01:32.139126 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 29 12:01:32.139140 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 29 12:01:32.139155 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 29 12:01:32.139170 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:01:32.139184 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:01:32.142237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:01:32.142251 kernel: landlock: Up and running. Jan 29 12:01:32.142260 kernel: SELinux: Initializing. Jan 29 12:01:32.142280 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.142295 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.142309 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 12:01:32.142330 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142345 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142360 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142375 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 12:01:32.142390 kernel: signal: max sigframe size: 3632 Jan 29 12:01:32.142405 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:01:32.142420 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:01:32.142435 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 12:01:32.142450 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:01:32.142468 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:01:32.142482 kernel: .... node #0, CPUs: #1 Jan 29 12:01:32.142498 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 29 12:01:32.142515 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 29 12:01:32.142531 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:01:32.142546 kernel: smpboot: Max logical packages: 1 Jan 29 12:01:32.142561 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Jan 29 12:01:32.142576 kernel: devtmpfs: initialized Jan 29 12:01:32.142594 kernel: x86/mm: Memory block size: 128MB Jan 29 12:01:32.142609 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 29 12:01:32.142624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:01:32.142638 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:01:32.142653 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:01:32.142668 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:01:32.142683 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:01:32.142698 kernel: audit: type=2000 audit(1738152090.028:1): state=initialized audit_enabled=0 res=1 Jan 29 12:01:32.142712 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:01:32.142730 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:01:32.142745 kernel: cpuidle: using governor menu Jan 29 12:01:32.142760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:01:32.142774 kernel: dca service started, version 1.12.1 Jan 29 12:01:32.142790 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 29 12:01:32.142804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:01:32.142819 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:01:32.142835 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:01:32.142850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:01:32.142868 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:01:32.142883 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:01:32.142899 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:01:32.142913 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:01:32.142929 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:01:32.142944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:01:32.142959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:01:32.142973 kernel: ACPI: Interpreter enabled Jan 29 12:01:32.142988 kernel: ACPI: PM: (supports S0 S5) Jan 29 12:01:32.143006 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:01:32.143020 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:01:32.143035 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 12:01:32.143050 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 29 12:01:32.143065 kernel: iommu: Default domain type: Translated Jan 29 12:01:32.143080 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:01:32.143095 kernel: efivars: Registered efivars operations Jan 29 12:01:32.143110 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:01:32.143124 kernel: PCI: System does not support PCI Jan 29 12:01:32.143141 kernel: vgaarb: loaded Jan 29 12:01:32.143156 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 29 12:01:32.143171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:01:32.148036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:01:32.148070 kernel: 
pnp: PnP ACPI init Jan 29 12:01:32.148086 kernel: pnp: PnP ACPI: found 3 devices Jan 29 12:01:32.148101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:01:32.148117 kernel: NET: Registered PF_INET protocol family Jan 29 12:01:32.148131 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 12:01:32.148152 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 12:01:32.148166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:01:32.148181 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:01:32.148206 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:01:32.148220 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 12:01:32.148234 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.148248 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.148275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:01:32.148290 kernel: NET: Registered PF_XDP protocol family Jan 29 12:01:32.148308 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:01:32.148320 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:01:32.148332 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Jan 29 12:01:32.148347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:01:32.148361 kernel: Initialise system trusted keyrings Jan 29 12:01:32.148375 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 12:01:32.148390 kernel: Key type asymmetric registered Jan 29 12:01:32.148405 kernel: Asymmetric key parser 'x509' registered Jan 29 12:01:32.148418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:01:32.148435 kernel: io scheduler mq-deadline registered Jan 29 12:01:32.148449 kernel: io scheduler kyber registered Jan 29 12:01:32.148463 kernel: io scheduler bfq registered Jan 29 12:01:32.148478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:01:32.148493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:01:32.148506 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:01:32.148519 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:01:32.148532 kernel: i8042: PNP: No PS/2 controller found. 
Jan 29 12:01:32.148733 kernel: rtc_cmos 00:02: registered as rtc0 Jan 29 12:01:32.148874 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T12:01:31 UTC (1738152091) Jan 29 12:01:32.148999 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 29 12:01:32.149017 kernel: intel_pstate: CPU model not supported Jan 29 12:01:32.149032 kernel: efifb: probing for efifb Jan 29 12:01:32.149046 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 29 12:01:32.149061 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 29 12:01:32.149077 kernel: efifb: scrolling: redraw Jan 29 12:01:32.149099 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 29 12:01:32.149115 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:01:32.149131 kernel: fb0: EFI VGA frame buffer device Jan 29 12:01:32.149145 kernel: pstore: Using crash dump compression: deflate Jan 29 12:01:32.149159 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 12:01:32.149175 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:01:32.149203 kernel: Segment Routing with IPv6 Jan 29 12:01:32.149220 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:01:32.149236 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:01:32.149250 kernel: Key type dns_resolver registered Jan 29 12:01:32.149269 kernel: IPI shorthand broadcast: enabled Jan 29 12:01:32.149285 kernel: sched_clock: Marking stable (924004500, 49208500)->(1232265200, -259052200) Jan 29 12:01:32.149300 kernel: registered taskstats version 1 Jan 29 12:01:32.149316 kernel: Loading compiled-in X.509 certificates Jan 29 12:01:32.149331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:01:32.149346 kernel: Key type .fscrypt registered Jan 29 12:01:32.149363 kernel: Key type fscrypt-provisioning registered Jan 29 12:01:32.149378 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:01:32.149397 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:01:32.149412 kernel: ima: No architecture policies found Jan 29 12:01:32.149428 kernel: clk: Disabling unused clocks Jan 29 12:01:32.149445 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:01:32.149461 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:01:32.149477 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:01:32.149494 kernel: Run /init as init process Jan 29 12:01:32.149508 kernel: with arguments: Jan 29 12:01:32.149523 kernel: /init Jan 29 12:01:32.149541 kernel: with environment: Jan 29 12:01:32.149555 kernel: HOME=/ Jan 29 12:01:32.149570 kernel: TERM=linux Jan 29 12:01:32.149585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:01:32.149603 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:32.149622 systemd[1]: Detected virtualization microsoft. Jan 29 12:01:32.149638 systemd[1]: Detected architecture x86-64. Jan 29 12:01:32.149654 systemd[1]: Running in initrd. Jan 29 12:01:32.149672 systemd[1]: No hostname configured, using default hostname. Jan 29 12:01:32.149688 systemd[1]: Hostname set to . Jan 29 12:01:32.149704 systemd[1]: Initializing machine ID from random generator. 
Jan 29 12:01:32.149720 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:01:32.149736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:32.149752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:32.149770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:01:32.149786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:01:32.149805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:01:32.149821 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:01:32.149840 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:01:32.149856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:01:32.149873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:32.149890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:32.149906 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:32.149925 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:32.149941 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:32.149957 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:32.149973 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:32.149989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:32.150005 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:01:32.150021 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:01:32.150038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:32.150054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:32.150074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:32.150090 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:32.150106 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:01:32.150122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:32.150138 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:01:32.150155 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:01:32.150170 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:32.150202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:01:32.150222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:32.150269 systemd-journald[176]: Collecting audit messages is disabled. Jan 29 12:01:32.150306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:32.150323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:32.150342 systemd-journald[176]: Journal started Jan 29 12:01:32.150393 systemd-journald[176]: Runtime Journal (/run/log/journal/ca6cb3618f9940b1a51555a8607d6b39) is 8.0M, max 158.8M, 150.8M free. 
Jan 29 12:01:32.132782 systemd-modules-load[177]: Inserted module 'overlay' Jan 29 12:01:32.155201 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:01:32.155899 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:01:32.169542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:32.178459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:32.183806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:32.185467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:32.195360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:32.210365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:32.225210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:01:32.236341 kernel: Bridge firewalling registered Jan 29 12:01:32.237499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:32.243840 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 29 12:01:32.245051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:32.252039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:32.259343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:32.272504 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:01:32.281387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:32.289771 dracut-cmdline[206]: dracut-dracut-053 Jan 29 12:01:32.294037 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.311516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:32.326510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:32.374456 systemd-resolved[246]: Positive Trust Anchors: Jan 29 12:01:32.376938 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:32.376998 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:32.400331 kernel: SCSI subsystem initialized Jan 29 12:01:32.400481 systemd-resolved[246]: Defaulting to hostname 'linux'. Jan 29 12:01:32.403961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:32.409608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:32.419206 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:01:32.430213 kernel: iscsi: registered transport (tcp) Jan 29 12:01:32.452023 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:01:32.452121 kernel: QLogic iSCSI HBA Driver Jan 29 12:01:32.487989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:32.496368 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:01:32.525160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:01:32.525271 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:01:32.528398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:01:32.569223 kernel: raid6: avx512x4 gen() 18205 MB/s Jan 29 12:01:32.588210 kernel: raid6: avx512x2 gen() 18508 MB/s Jan 29 12:01:32.607203 kernel: raid6: avx512x1 gen() 18429 MB/s Jan 29 12:01:32.626203 kernel: raid6: avx2x4 gen() 18446 MB/s Jan 29 12:01:32.646203 kernel: raid6: avx2x2 gen() 18493 MB/s Jan 29 12:01:32.666278 kernel: raid6: avx2x1 gen() 14005 MB/s Jan 29 12:01:32.666321 kernel: raid6: using algorithm avx512x2 gen() 18508 MB/s Jan 29 12:01:32.687248 kernel: raid6: .... xor() 30369 MB/s, rmw enabled Jan 29 12:01:32.687295 kernel: raid6: using avx512x2 recovery algorithm Jan 29 12:01:32.710234 kernel: xor: automatically using best checksumming function avx Jan 29 12:01:32.862221 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:01:32.871720 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:32.877482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:32.896812 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 29 12:01:32.903256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:32.923366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:01:32.936266 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 29 12:01:32.964389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:32.972432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:33.012819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:33.026370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 29 12:01:33.066499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:33.079459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:33.087625 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:01:33.091487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:33.094852 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:33.110362 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:01:33.128207 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:01:33.131204 kernel: AES CTR mode by8 optimization enabled Jan 29 12:01:33.134207 kernel: hv_vmbus: Vmbus version:5.2 Jan 29 12:01:33.147214 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:33.155416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:33.155602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:33.164347 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:33.170485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:33.178899 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 12:01:33.178925 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 12:01:33.178937 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 29 12:01:33.174979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:33.198631 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 29 12:01:33.197652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:33.209908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:33.225217 kernel: PTP clock support registered Jan 29 12:01:33.236205 kernel: hv_utils: Registering HyperV Utility Driver Jan 29 12:01:33.236260 kernel: hv_vmbus: registering driver hv_utils Jan 29 12:01:33.241308 kernel: hv_utils: Heartbeat IC version 3.0 Jan 29 12:01:33.241349 kernel: hv_utils: Shutdown IC version 3.2 Jan 29 12:01:33.993775 kernel: hv_utils: TimeSync IC version 4.0 Jan 29 12:01:33.993398 systemd-resolved[246]: Clock change detected. Flushing caches. Jan 29 12:01:34.009675 kernel: hv_vmbus: registering driver hv_netvsc Jan 29 12:01:34.011804 kernel: hv_vmbus: registering driver hv_storvsc Jan 29 12:01:34.017464 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:01:34.017505 kernel: scsi host0: storvsc_host_t Jan 29 12:01:34.024012 kernel: scsi host1: storvsc_host_t Jan 29 12:01:34.028994 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 29 12:01:34.030294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:34.035337 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 29 12:01:34.045857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 12:01:34.058670 kernel: hv_vmbus: registering driver hid_hyperv Jan 29 12:01:34.069014 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 29 12:01:34.069080 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 29 12:01:34.082356 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 29 12:01:34.089124 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 12:01:34.089161 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 29 12:01:34.093956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:34.108802 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 29 12:01:34.123364 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 12:01:34.123576 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 12:01:34.123756 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 29 12:01:34.123926 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 29 12:01:34.124153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:34.124174 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 12:01:34.199323 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: VF slot 1 added Jan 29 12:01:34.207996 kernel: hv_vmbus: registering driver hv_pci Jan 29 12:01:34.213427 kernel: hv_pci 0c1e5017-2d10-4a3d-a2aa-5612ac24e76d: PCI VMBus probing: Using version 0x10004 Jan 29 12:01:34.256708 kernel: hv_pci 0c1e5017-2d10-4a3d-a2aa-5612ac24e76d: PCI host bridge to bus 2d10:00 Jan 29 12:01:34.257273 kernel: pci_bus 2d10:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 29 12:01:34.257447 kernel: pci_bus 2d10:00: No busn resource found for root bus, will use [bus 00-ff] Jan 29 12:01:34.257608 kernel: pci 2d10:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 29 12:01:34.257795 kernel: pci 2d10:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 12:01:34.258046 kernel: pci 2d10:00:02.0: enabling Extended Tags Jan 29 12:01:34.258171 kernel: pci 2d10:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2d10:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 29 12:01:34.258276 kernel: pci_bus 2d10:00: busn_res: [bus 00-ff] end is updated to 00 Jan 29 12:01:34.258370 kernel: pci 2d10:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 12:01:34.429611 kernel: mlx5_core 2d10:00:02.0: enabling device (0000 -> 0002) Jan 29 12:01:34.656609 kernel: mlx5_core 2d10:00:02.0: firmware version: 14.30.5000 Jan 29 12:01:34.656847 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: VF registering: eth1 Jan 29 12:01:34.657453 kernel: mlx5_core 2d10:00:02.0 eth1: joined to eth0 Jan 29 12:01:34.657675 kernel: mlx5_core 2d10:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:01:34.669003 kernel: mlx5_core 2d10:00:02.0 enP11536s1: renamed from eth1 Jan 29 12:01:34.669660 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 12:01:34.714070 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444) Jan 29 12:01:34.725888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 12:01:34.741008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 29 12:01:34.776007 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458) Jan 29 12:01:34.791216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 12:01:34.794759 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 29 12:01:34.813282 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:01:35.840021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:35.840565 disk-uuid[596]: The operation has completed successfully. Jan 29 12:01:35.916901 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:01:35.917038 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:01:35.943221 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:01:35.950647 sh[712]: Success Jan 29 12:01:35.982022 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:01:36.169337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:01:36.181891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:01:36.186560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:01:36.202927 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:01:36.202992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:36.206299 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:01:36.208972 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:01:36.211325 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:01:36.760454 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:01:36.764086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:01:36.776233 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:01:36.782707 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:01:36.800353 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:36.800403 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:36.800417 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:36.824870 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:36.838837 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:01:36.841597 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:36.848965 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:01:36.860242 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:01:36.878477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:36.888143 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 12:01:36.910636 systemd-networkd[896]: lo: Link UP Jan 29 12:01:36.910647 systemd-networkd[896]: lo: Gained carrier Jan 29 12:01:36.912871 systemd-networkd[896]: Enumeration completed Jan 29 12:01:36.912971 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:36.916515 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:36.916519 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:36.919777 systemd[1]: Reached target network.target - Network. Jan 29 12:01:36.977004 kernel: mlx5_core 2d10:00:02.0 enP11536s1: Link up Jan 29 12:01:37.017023 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: Data path switched to VF: enP11536s1 Jan 29 12:01:37.017902 systemd-networkd[896]: enP11536s1: Link UP Jan 29 12:01:37.018109 systemd-networkd[896]: eth0: Link UP Jan 29 12:01:37.018380 systemd-networkd[896]: eth0: Gained carrier Jan 29 12:01:37.018399 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:37.023269 systemd-networkd[896]: enP11536s1: Gained carrier Jan 29 12:01:37.056066 systemd-networkd[896]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:37.707916 ignition[869]: Ignition 2.19.0 Jan 29 12:01:37.707927 ignition[869]: Stage: fetch-offline Jan 29 12:01:37.707986 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.708000 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.708122 ignition[869]: parsed url from cmdline: "" Jan 29 12:01:37.708128 ignition[869]: no config URL provided Jan 29 12:01:37.708135 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:37.718364 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:37.708145 ignition[869]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:37.708152 ignition[869]: failed to fetch config: resource requires networking Jan 29 12:01:37.716345 ignition[869]: Ignition finished successfully Jan 29 12:01:37.734191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 12:01:37.750962 ignition[904]: Ignition 2.19.0 Jan 29 12:01:37.750974 ignition[904]: Stage: fetch Jan 29 12:01:37.751203 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.751217 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.751318 ignition[904]: parsed url from cmdline: "" Jan 29 12:01:37.751321 ignition[904]: no config URL provided Jan 29 12:01:37.751326 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:37.751333 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:37.751356 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 12:01:37.832094 ignition[904]: GET result: OK Jan 29 12:01:37.832237 ignition[904]: config has been read from IMDS userdata Jan 29 12:01:37.832280 ignition[904]: parsing config with SHA512: c81662c9d4eeb8a4d3e625485d0e62ad6dadfc15b57e47a4624f8785b18b1252136b50b31a9c28b2aeaee36f6119e50bec3e963d81abc1127c4b7442754745e9 Jan 29 12:01:37.839829 unknown[904]: fetched base config from "system" Jan 29 12:01:37.839850 unknown[904]: fetched base config from "system" Jan 29 12:01:37.840648 ignition[904]: fetch: fetch complete Jan 29 12:01:37.839861 unknown[904]: fetched user config from "azure" Jan 29 12:01:37.840656 ignition[904]: fetch: fetch passed Jan 29 12:01:37.842633 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:01:37.840717 ignition[904]: Ignition finished successfully Jan 29 12:01:37.861180 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:01:37.876732 ignition[910]: Ignition 2.19.0 Jan 29 12:01:37.876742 ignition[910]: Stage: kargs Jan 29 12:01:37.876974 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.877012 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.877858 ignition[910]: kargs: kargs passed Jan 29 12:01:37.877901 ignition[910]: Ignition finished successfully Jan 29 12:01:37.888213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:01:37.897149 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:01:37.913166 ignition[916]: Ignition 2.19.0 Jan 29 12:01:37.913177 ignition[916]: Stage: disks Jan 29 12:01:37.915684 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:01:37.913420 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.919507 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:37.913430 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.924322 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:01:37.914281 ignition[916]: disks: disks passed Jan 29 12:01:37.927370 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:37.914319 ignition[916]: Ignition finished successfully Jan 29 12:01:37.934399 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:37.950158 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:37.958151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:01:38.013499 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 12:01:38.019001 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 29 12:01:38.031630 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:01:38.123001 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:01:38.123404 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:01:38.125476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:38.159159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:38.164577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:01:38.168950 systemd-networkd[896]: enP11536s1: Gained IPv6LL Jan 29 12:01:38.176010 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (935) Jan 29 12:01:38.177172 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 12:01:38.184297 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:38.185526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:01:38.197594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:38.197621 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:38.197634 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:38.192762 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:38.203521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:38.207823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:01:38.217151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:01:38.843762 coreos-metadata[937]: Jan 29 12:01:38.843 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 12:01:38.848591 coreos-metadata[937]: Jan 29 12:01:38.846 INFO Fetch successful Jan 29 12:01:38.848591 coreos-metadata[937]: Jan 29 12:01:38.846 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 29 12:01:38.859870 coreos-metadata[937]: Jan 29 12:01:38.859 INFO Fetch successful Jan 29 12:01:38.872186 systemd-networkd[896]: eth0: Gained IPv6LL Jan 29 12:01:38.876123 coreos-metadata[937]: Jan 29 12:01:38.876 INFO wrote hostname ci-4081.3.0-a-76e05e3785 to /sysroot/etc/hostname Jan 29 12:01:38.878057 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:38.946941 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:01:38.978902 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:01:39.013369 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:01:39.036015 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:01:39.939177 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:39.951090 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:01:39.960197 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:01:39.966145 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:01:39.973516 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:39.992783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 12:01:40.005556 ignition[1054]: INFO : Ignition 2.19.0 Jan 29 12:01:40.005556 ignition[1054]: INFO : Stage: mount Jan 29 12:01:40.012305 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:40.012305 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:40.012305 ignition[1054]: INFO : mount: mount passed Jan 29 12:01:40.012305 ignition[1054]: INFO : Ignition finished successfully Jan 29 12:01:40.007670 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:01:40.023178 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:01:40.041172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:40.056537 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1065) Jan 29 12:01:40.056610 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:40.057996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:40.062564 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:40.070131 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:40.072123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:40.098245 ignition[1081]: INFO : Ignition 2.19.0 Jan 29 12:01:40.098245 ignition[1081]: INFO : Stage: files Jan 29 12:01:40.102631 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:40.102631 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:40.102631 ignition[1081]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:01:40.125241 ignition[1081]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:01:40.125241 ignition[1081]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:01:40.212184 ignition[1081]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:01:40.217193 ignition[1081]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:01:40.217193 ignition[1081]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:01:40.212675 unknown[1081]: wrote ssh authorized keys file for user: core Jan 29 12:01:40.258256 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:40.263482 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:01:40.308498 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:01:40.430242 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:01:40.944157 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:01:41.302176 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:41.302176 ignition[1081]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:01:41.317457 ignition[1081]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:41.322751 ignition[1081]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:41.322751 ignition[1081]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:01:41.330767 ignition[1081]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:41.334667 ignition[1081]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:41.339105 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:41.343416 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:41.348296 ignition[1081]: INFO : files: files passed Jan 29 12:01:41.348296 ignition[1081]: INFO : Ignition finished successfully Jan 29 12:01:41.354532 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:01:41.361192 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 29 12:01:41.367218 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:01:41.383223 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:01:41.383362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:01:41.410501 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.410501 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.419058 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.423942 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:41.425300 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:01:41.437254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:01:41.463336 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:01:41.463455 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:01:41.473188 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:01:41.475813 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:01:41.481070 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:01:41.493218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:01:41.507495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:41.517182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:01:41.529413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:41.535302 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:41.538521 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:01:41.543637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:01:41.543777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:41.549315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:01:41.553543 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:01:41.560955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:01:41.565779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:41.571170 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:41.576693 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:01:41.581886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:41.587503 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:01:41.592864 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:01:41.599894 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:01:41.604054 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:01:41.604209 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:41.610571 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 12:01:41.610968 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:41.611349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:01:41.618025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:41.623013 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:01:41.623163 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:41.628951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:01:41.629119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:41.633959 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:01:41.634120 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:01:41.639709 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 12:01:41.639843 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:41.658609 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:01:41.675154 ignition[1134]: INFO : Ignition 2.19.0 Jan 29 12:01:41.678098 ignition[1134]: INFO : Stage: umount Jan 29 12:01:41.678098 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:41.678098 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:41.678098 ignition[1134]: INFO : umount: umount passed Jan 29 12:01:41.678098 ignition[1134]: INFO : Ignition finished successfully Jan 29 12:01:41.677887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:01:41.682059 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:01:41.682235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:41.690631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:01:41.690771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:41.704327 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:01:41.707294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:01:41.720376 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:01:41.720484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:01:41.726448 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:01:41.726547 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:01:41.731653 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:01:41.731703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:01:41.732564 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:01:41.732600 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:01:41.733369 systemd[1]: Stopped target network.target - Network. Jan 29 12:01:41.740613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:01:41.740671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:41.743631 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:01:41.746162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 12:01:41.746212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:41.751722 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:01:41.756236 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:01:41.760576 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:01:41.763022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:41.767727 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:01:41.767778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:41.772426 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:01:41.772497 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:01:41.777839 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:01:41.780668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:41.788190 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:01:41.794340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:01:41.799925 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:01:41.802044 systemd-networkd[896]: eth0: DHCPv6 lease lost Jan 29 12:01:41.804581 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:01:41.804687 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:01:41.808881 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:01:41.808998 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:01:41.816801 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:01:41.816868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:41.832410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:01:41.839238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:01:41.839328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:41.842634 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:01:41.842679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:41.847357 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:01:41.847402 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:41.852348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:01:41.852394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:41.863877 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:41.892586 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:01:41.892736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:41.901920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:01:41.902017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:41.912250 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:01:41.914627 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:41.924380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 29 12:01:41.924454 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:41.929931 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:01:41.930030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:41.934512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:41.934563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:41.953008 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: Data path switched from VF: enP11536s1 Jan 29 12:01:41.957213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:01:41.960059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:01:41.960131 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:41.963442 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:01:41.963499 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:41.969835 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:01:41.969891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:41.972967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:41.973354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:41.976996 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:01:41.977523 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:01:41.990564 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:01:41.990688 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:01:42.223448 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:01:42.223612 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:01:42.228406 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:01:42.233349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:01:42.233435 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:42.247234 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:01:42.297662 systemd[1]: Switching root. 
Jan 29 12:01:42.369705 systemd-journald[176]: Journal stopped Jan 29 12:01:32.134596 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025 Jan 29 12:01:32.134636 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.134651 kernel: BIOS-provided physical RAM map: Jan 29 12:01:32.134662 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jan 29 12:01:32.134673 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jan 29 12:01:32.134684 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jan 29 12:01:32.134697 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jan 29 12:01:32.134711 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jan 29 12:01:32.134722 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jan 29 12:01:32.134733 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jan 29 12:01:32.134745 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jan 29 12:01:32.134755 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jan 29 12:01:32.134766 kernel: printk: bootconsole [earlyser0] enabled Jan 29 12:01:32.134777 kernel: NX (Execute Disable) protection: active Jan 29 12:01:32.134794 kernel: APIC: Static calls initialized Jan 29 12:01:32.134806 kernel: efi: EFI v2.7 by Microsoft Jan 29 12:01:32.134819 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Jan 29 12:01:32.134830 kernel: SMBIOS 3.1.0 present. 
Jan 29 12:01:32.134842 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jan 29 12:01:32.134854 kernel: Hypervisor detected: Microsoft Hyper-V Jan 29 12:01:32.134867 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jan 29 12:01:32.134879 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jan 29 12:01:32.134890 kernel: Hyper-V: Nested features: 0x1e0101 Jan 29 12:01:32.134903 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jan 29 12:01:32.134917 kernel: Hyper-V: Using hypercall for remote TLB flush Jan 29 12:01:32.134929 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 29 12:01:32.134941 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jan 29 12:01:32.134954 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jan 29 12:01:32.134966 kernel: tsc: Detected 2593.908 MHz processor Jan 29 12:01:32.134979 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 12:01:32.134991 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 12:01:32.135003 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jan 29 12:01:32.135015 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jan 29 12:01:32.135031 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 12:01:32.135042 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jan 29 12:01:32.135055 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jan 29 12:01:32.135067 kernel: Using GB pages for direct mapping Jan 29 12:01:32.135079 kernel: Secure boot disabled Jan 29 12:01:32.135091 kernel: ACPI: Early table checksum verification disabled Jan 29 12:01:32.135103 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jan 29 12:01:32.135121 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135137 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135150 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jan 29 12:01:32.135163 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jan 29 12:01:32.135176 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135200 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135213 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135230 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135243 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135257 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135270 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jan 29 12:01:32.135283 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jan 29 12:01:32.135296 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jan 29 12:01:32.135309 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jan 29 12:01:32.135322 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jan 29 12:01:32.135338 kernel: 
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 29 12:01:32.135351 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 29 12:01:32.135363 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 29 12:01:32.135376 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 29 12:01:32.135389 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 29 12:01:32.135402 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 29 12:01:32.135415 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 12:01:32.135428 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 12:01:32.135441 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 29 12:01:32.135456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 29 12:01:32.135468 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 29 12:01:32.135489 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 29 12:01:32.135501 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 29 12:01:32.135515 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 29 12:01:32.135529 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 29 12:01:32.135545 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 29 12:01:32.135558 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 29 12:01:32.135572 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 29 12:01:32.135589 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 29 12:01:32.135603 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 29 12:01:32.135617 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 29 12:01:32.135631 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 29 12:01:32.135645 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 29 12:01:32.135659 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 29 12:01:32.135673 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 29 12:01:32.135687 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 29 12:01:32.135701 kernel: Zone ranges: Jan 29 12:01:32.135717 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 12:01:32.135731 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 29 12:01:32.135745 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 29 12:01:32.135759 kernel: Movable zone start for each node Jan 29 12:01:32.135773 kernel: Early memory node ranges Jan 29 12:01:32.135787 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 29 12:01:32.135801 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 29 12:01:32.135814 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 29 12:01:32.135828 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 29 12:01:32.135844 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 29 12:01:32.135858 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 12:01:32.135872 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 29 12:01:32.135886 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 29 12:01:32.135900 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 29 12:01:32.135914 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 29 12:01:32.135927 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 29 12:01:32.135941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 12:01:32.135955 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 12:01:32.135972 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 29 12:01:32.135986 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 12:01:32.136000 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 29 12:01:32.136014 kernel: Booting paravirtualized kernel on Hyper-V Jan 29 12:01:32.136028 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 12:01:32.136042 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 12:01:32.136056 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 29 12:01:32.136070 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 12:01:32.136083 kernel: pcpu-alloc: [0] 0 1 Jan 29 12:01:32.136099 kernel: Hyper-V: PV spinlocks enabled Jan 29 12:01:32.136113 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 29 12:01:32.136128 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.136143 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 12:01:32.136156 kernel: random: crng init done Jan 29 12:01:32.136170 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 29 12:01:32.136184 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 29 12:01:32.138240 kernel: Fallback order for Node 0: 0 Jan 29 12:01:32.138263 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 29 12:01:32.138287 kernel: Policy zone: Normal Jan 29 12:01:32.138305 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 12:01:32.138320 kernel: software IO TLB: area num 2. Jan 29 12:01:32.138335 kernel: Memory: 8069612K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 317588K reserved, 0K cma-reserved) Jan 29 12:01:32.138350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 12:01:32.138365 kernel: ftrace: allocating 37921 entries in 149 pages Jan 29 12:01:32.138380 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 12:01:32.138395 kernel: Dynamic Preempt: voluntary Jan 29 12:01:32.138409 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 12:01:32.138425 kernel: rcu: RCU event tracing is enabled. Jan 29 12:01:32.138444 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 12:01:32.138459 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 12:01:32.138474 kernel: Rude variant of Tasks RCU enabled. Jan 29 12:01:32.138489 kernel: Tracing variant of Tasks RCU enabled. 
Jan 29 12:01:32.138504 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 12:01:32.138522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 12:01:32.138537 kernel: Using NULL legacy PIC Jan 29 12:01:32.138552 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 29 12:01:32.138566 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 29 12:01:32.138582 kernel: Console: colour dummy device 80x25 Jan 29 12:01:32.138597 kernel: printk: console [tty1] enabled Jan 29 12:01:32.138611 kernel: printk: console [ttyS0] enabled Jan 29 12:01:32.138625 kernel: printk: bootconsole [earlyser0] disabled Jan 29 12:01:32.138640 kernel: ACPI: Core revision 20230628 Jan 29 12:01:32.138655 kernel: Failed to register legacy timer interrupt Jan 29 12:01:32.138673 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 12:01:32.138687 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 29 12:01:32.138702 kernel: Hyper-V: Using IPI hypercalls Jan 29 12:01:32.138716 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 29 12:01:32.138731 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 29 12:01:32.138747 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 29 12:01:32.138762 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 29 12:01:32.138777 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 29 12:01:32.138792 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 29 12:01:32.138810 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593908) Jan 29 12:01:32.138825 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 12:01:32.138840 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 12:01:32.138855 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 12:01:32.138870 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 12:01:32.138885 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 12:01:32.138900 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 12:01:32.138915 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 29 12:01:32.138930 kernel: RETBleed: Vulnerable Jan 29 12:01:32.138947 kernel: Speculative Store Bypass: Vulnerable Jan 29 12:01:32.138961 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:01:32.138976 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 12:01:32.138991 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 12:01:32.139006 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 12:01:32.139020 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 12:01:32.139034 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 12:01:32.139049 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 29 12:01:32.139064 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 29 12:01:32.139079 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 29 12:01:32.139094 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 12:01:32.139111 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 29 12:01:32.139126 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 29 12:01:32.139140 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 29 12:01:32.139155 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 29 12:01:32.139170 kernel: Freeing SMP alternatives memory: 32K Jan 29 12:01:32.139184 kernel: pid_max: default: 32768 minimum: 301 Jan 29 12:01:32.142237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 12:01:32.142251 kernel: landlock: Up and running. Jan 29 12:01:32.142260 kernel: SELinux: Initializing. Jan 29 12:01:32.142280 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.142295 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.142309 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 29 12:01:32.142330 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142345 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142360 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 12:01:32.142375 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 29 12:01:32.142390 kernel: signal: max sigframe size: 3632 Jan 29 12:01:32.142405 kernel: rcu: Hierarchical SRCU implementation. Jan 29 12:01:32.142420 kernel: rcu: Max phase no-delay instances is 400. Jan 29 12:01:32.142435 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 12:01:32.142450 kernel: smp: Bringing up secondary CPUs ... Jan 29 12:01:32.142468 kernel: smpboot: x86: Booting SMP configuration: Jan 29 12:01:32.142482 kernel: .... node #0, CPUs: #1 Jan 29 12:01:32.142498 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 29 12:01:32.142515 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 29 12:01:32.142531 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 12:01:32.142546 kernel: smpboot: Max logical packages: 1 Jan 29 12:01:32.142561 kernel: smpboot: Total of 2 processors activated (10375.63 BogoMIPS) Jan 29 12:01:32.142576 kernel: devtmpfs: initialized Jan 29 12:01:32.142594 kernel: x86/mm: Memory block size: 128MB Jan 29 12:01:32.142609 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 29 12:01:32.142624 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 12:01:32.142638 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 12:01:32.142653 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 12:01:32.142668 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 12:01:32.142683 kernel: audit: initializing netlink subsys (disabled) Jan 29 12:01:32.142698 kernel: audit: type=2000 audit(1738152090.028:1): state=initialized audit_enabled=0 res=1 Jan 29 12:01:32.142712 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 12:01:32.142730 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 12:01:32.142745 kernel: cpuidle: using governor menu Jan 29 12:01:32.142760 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 12:01:32.142774 kernel: dca service started, version 1.12.1 Jan 29 12:01:32.142790 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 29 12:01:32.142804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 29 12:01:32.142819 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 12:01:32.142835 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 12:01:32.142850 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 12:01:32.142868 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 12:01:32.142883 kernel: ACPI: Added _OSI(Module Device) Jan 29 12:01:32.142899 kernel: ACPI: Added _OSI(Processor Device) Jan 29 12:01:32.142913 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 12:01:32.142929 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 12:01:32.142944 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 12:01:32.142959 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 12:01:32.142973 kernel: ACPI: Interpreter enabled Jan 29 12:01:32.142988 kernel: ACPI: PM: (supports S0 S5) Jan 29 12:01:32.143006 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 12:01:32.143020 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 12:01:32.143035 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 29 12:01:32.143050 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 29 12:01:32.143065 kernel: iommu: Default domain type: Translated Jan 29 12:01:32.143080 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 12:01:32.143095 kernel: efivars: Registered efivars operations Jan 29 12:01:32.143110 kernel: PCI: Using ACPI for IRQ routing Jan 29 12:01:32.143124 kernel: PCI: System does not support PCI Jan 29 12:01:32.143141 kernel: vgaarb: loaded Jan 29 12:01:32.143156 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 29 12:01:32.143171 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 12:01:32.148036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 12:01:32.148070 kernel: 
pnp: PnP ACPI init Jan 29 12:01:32.148086 kernel: pnp: PnP ACPI: found 3 devices Jan 29 12:01:32.148101 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 12:01:32.148117 kernel: NET: Registered PF_INET protocol family Jan 29 12:01:32.148131 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 12:01:32.148152 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 29 12:01:32.148166 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 12:01:32.148181 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 29 12:01:32.148206 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 29 12:01:32.148220 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 29 12:01:32.148234 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.148248 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 29 12:01:32.148275 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 12:01:32.148290 kernel: NET: Registered PF_XDP protocol family Jan 29 12:01:32.148308 kernel: PCI: CLS 0 bytes, default 64 Jan 29 12:01:32.148320 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 29 12:01:32.148332 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Jan 29 12:01:32.148347 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 12:01:32.148361 kernel: Initialise system trusted keyrings Jan 29 12:01:32.148375 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 29 12:01:32.148390 kernel: Key type asymmetric registered Jan 29 12:01:32.148405 kernel: Asymmetric key parser 'x509' registered Jan 29 12:01:32.148418 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 12:01:32.148435 kernel: io scheduler mq-deadline registered Jan 29 12:01:32.148449 kernel: io scheduler kyber registered Jan 29 12:01:32.148463 kernel: io scheduler bfq registered Jan 29 12:01:32.148478 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 12:01:32.148493 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 12:01:32.148506 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 12:01:32.148519 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 29 12:01:32.148532 kernel: i8042: PNP: No PS/2 controller found. 
Jan 29 12:01:32.148733 kernel: rtc_cmos 00:02: registered as rtc0 Jan 29 12:01:32.148874 kernel: rtc_cmos 00:02: setting system clock to 2025-01-29T12:01:31 UTC (1738152091) Jan 29 12:01:32.148999 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 29 12:01:32.149017 kernel: intel_pstate: CPU model not supported Jan 29 12:01:32.149032 kernel: efifb: probing for efifb Jan 29 12:01:32.149046 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 29 12:01:32.149061 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 29 12:01:32.149077 kernel: efifb: scrolling: redraw Jan 29 12:01:32.149099 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 29 12:01:32.149115 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:01:32.149131 kernel: fb0: EFI VGA frame buffer device Jan 29 12:01:32.149145 kernel: pstore: Using crash dump compression: deflate Jan 29 12:01:32.149159 kernel: pstore: Registered efi_pstore as persistent store backend Jan 29 12:01:32.149175 kernel: NET: Registered PF_INET6 protocol family Jan 29 12:01:32.149203 kernel: Segment Routing with IPv6 Jan 29 12:01:32.149220 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 12:01:32.149236 kernel: NET: Registered PF_PACKET protocol family Jan 29 12:01:32.149250 kernel: Key type dns_resolver registered Jan 29 12:01:32.149269 kernel: IPI shorthand broadcast: enabled Jan 29 12:01:32.149285 kernel: sched_clock: Marking stable (924004500, 49208500)->(1232265200, -259052200) Jan 29 12:01:32.149300 kernel: registered taskstats version 1 Jan 29 12:01:32.149316 kernel: Loading compiled-in X.509 certificates Jan 29 12:01:32.149331 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 29 12:01:32.149346 kernel: Key type .fscrypt registered Jan 29 12:01:32.149363 kernel: Key type fscrypt-provisioning registered Jan 29 12:01:32.149378 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 12:01:32.149397 kernel: ima: Allocated hash algorithm: sha1 Jan 29 12:01:32.149412 kernel: ima: No architecture policies found Jan 29 12:01:32.149428 kernel: clk: Disabling unused clocks Jan 29 12:01:32.149445 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 29 12:01:32.149461 kernel: Write protecting the kernel read-only data: 36864k Jan 29 12:01:32.149477 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 29 12:01:32.149494 kernel: Run /init as init process Jan 29 12:01:32.149508 kernel: with arguments: Jan 29 12:01:32.149523 kernel: /init Jan 29 12:01:32.149541 kernel: with environment: Jan 29 12:01:32.149555 kernel: HOME=/ Jan 29 12:01:32.149570 kernel: TERM=linux Jan 29 12:01:32.149585 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 12:01:32.149603 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:32.149622 systemd[1]: Detected virtualization microsoft. Jan 29 12:01:32.149638 systemd[1]: Detected architecture x86-64. Jan 29 12:01:32.149654 systemd[1]: Running in initrd. Jan 29 12:01:32.149672 systemd[1]: No hostname configured, using default hostname. Jan 29 12:01:32.149688 systemd[1]: Hostname set to . Jan 29 12:01:32.149704 systemd[1]: Initializing machine ID from random generator. 
Jan 29 12:01:32.149720 systemd[1]: Queued start job for default target initrd.target. Jan 29 12:01:32.149736 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:32.149752 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:32.149770 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 12:01:32.149786 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:01:32.149805 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 12:01:32.149821 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 12:01:32.149840 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 12:01:32.149856 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 12:01:32.149873 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:32.149890 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:32.149906 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:32.149925 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:32.149941 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:32.149957 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:32.149973 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:32.149989 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:32.150005 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 12:01:32.150021 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 12:01:32.150038 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:32.150054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:32.150074 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:32.150090 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:32.150106 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 12:01:32.150122 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:32.150138 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 12:01:32.150155 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 12:01:32.150170 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:32.150202 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 12:01:32.150222 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:32.150269 systemd-journald[176]: Collecting audit messages is disabled. Jan 29 12:01:32.150306 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:32.150323 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:32.150342 systemd-journald[176]: Journal started Jan 29 12:01:32.150393 systemd-journald[176]: Runtime Journal (/run/log/journal/ca6cb3618f9940b1a51555a8607d6b39) is 8.0M, max 158.8M, 150.8M free. 
Jan 29 12:01:32.132782 systemd-modules-load[177]: Inserted module 'overlay' Jan 29 12:01:32.155201 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:01:32.155899 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 12:01:32.169542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:32.178459 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:32.183806 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:32.185467 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:32.195360 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:32.210365 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:32.225210 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 12:01:32.236341 kernel: Bridge firewalling registered Jan 29 12:01:32.237499 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:32.243840 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 29 12:01:32.245051 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:32.252039 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:32.259343 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:32.272504 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 12:01:32.281387 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:32.289771 dracut-cmdline[206]: dracut-dracut-053 Jan 29 12:01:32.294037 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 29 12:01:32.311516 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:32.326510 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:32.374456 systemd-resolved[246]: Positive Trust Anchors: Jan 29 12:01:32.376938 systemd-resolved[246]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:32.376998 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:32.400331 kernel: SCSI subsystem initialized Jan 29 12:01:32.400481 systemd-resolved[246]: Defaulting to hostname 'linux'. Jan 29 12:01:32.403961 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:32.409608 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:32.419206 kernel: Loading iSCSI transport class v2.0-870. Jan 29 12:01:32.430213 kernel: iscsi: registered transport (tcp) Jan 29 12:01:32.452023 kernel: iscsi: registered transport (qla4xxx) Jan 29 12:01:32.452121 kernel: QLogic iSCSI HBA Driver Jan 29 12:01:32.487989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:32.496368 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 12:01:32.525160 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 12:01:32.525271 kernel: device-mapper: uevent: version 1.0.3 Jan 29 12:01:32.528398 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 12:01:32.569223 kernel: raid6: avx512x4 gen() 18205 MB/s Jan 29 12:01:32.588210 kernel: raid6: avx512x2 gen() 18508 MB/s Jan 29 12:01:32.607203 kernel: raid6: avx512x1 gen() 18429 MB/s Jan 29 12:01:32.626203 kernel: raid6: avx2x4 gen() 18446 MB/s Jan 29 12:01:32.646203 kernel: raid6: avx2x2 gen() 18493 MB/s Jan 29 12:01:32.666278 kernel: raid6: avx2x1 gen() 14005 MB/s Jan 29 12:01:32.666321 kernel: raid6: using algorithm avx512x2 gen() 18508 MB/s Jan 29 12:01:32.687248 kernel: raid6: .... xor() 30369 MB/s, rmw enabled Jan 29 12:01:32.687295 kernel: raid6: using avx512x2 recovery algorithm Jan 29 12:01:32.710234 kernel: xor: automatically using best checksumming function avx Jan 29 12:01:32.862221 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 12:01:32.871720 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:32.877482 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:32.896812 systemd-udevd[395]: Using default interface naming scheme 'v255'. Jan 29 12:01:32.903256 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:32.923366 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 12:01:32.936266 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 29 12:01:32.964389 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:32.972432 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:33.012819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:33.026370 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 29 12:01:33.066499 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:33.079459 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:33.087625 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 12:01:33.091487 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:33.094852 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:33.110362 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 12:01:33.128207 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 12:01:33.131204 kernel: AES CTR mode by8 optimization enabled Jan 29 12:01:33.134207 kernel: hv_vmbus: Vmbus version:5.2 Jan 29 12:01:33.147214 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:33.155416 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:33.155602 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:33.164347 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 12:01:33.170485 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:33.178899 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 29 12:01:33.178925 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 29 12:01:33.178937 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 29 12:01:33.174979 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:33.198631 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 29 12:01:33.197652 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:33.209908 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:33.225217 kernel: PTP clock support registered Jan 29 12:01:33.236205 kernel: hv_utils: Registering HyperV Utility Driver Jan 29 12:01:33.236260 kernel: hv_vmbus: registering driver hv_utils Jan 29 12:01:33.241308 kernel: hv_utils: Heartbeat IC version 3.0 Jan 29 12:01:33.241349 kernel: hv_utils: Shutdown IC version 3.2 Jan 29 12:01:33.993775 kernel: hv_utils: TimeSync IC version 4.0 Jan 29 12:01:33.993398 systemd-resolved[246]: Clock change detected. Flushing caches. Jan 29 12:01:34.009675 kernel: hv_vmbus: registering driver hv_netvsc Jan 29 12:01:34.011804 kernel: hv_vmbus: registering driver hv_storvsc Jan 29 12:01:34.017464 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 29 12:01:34.017505 kernel: scsi host0: storvsc_host_t Jan 29 12:01:34.024012 kernel: scsi host1: storvsc_host_t Jan 29 12:01:34.028994 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 29 12:01:34.030294 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:34.035337 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 29 12:01:34.045857 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Jan 29 12:01:34.058670 kernel: hv_vmbus: registering driver hid_hyperv Jan 29 12:01:34.069014 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 29 12:01:34.069080 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 29 12:01:34.082356 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 29 12:01:34.089124 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 12:01:34.089161 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 29 12:01:34.093956 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:34.108802 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 29 12:01:34.123364 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 29 12:01:34.123576 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 12:01:34.123756 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 29 12:01:34.123926 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 29 12:01:34.124153 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:34.124174 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 12:01:34.199323 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: VF slot 1 added Jan 29 12:01:34.207996 kernel: hv_vmbus: registering driver hv_pci Jan 29 12:01:34.213427 kernel: hv_pci 0c1e5017-2d10-4a3d-a2aa-5612ac24e76d: PCI VMBus probing: Using version 0x10004 Jan 29 12:01:34.256708 kernel: hv_pci 0c1e5017-2d10-4a3d-a2aa-5612ac24e76d: PCI host bridge to bus 2d10:00 Jan 29 12:01:34.257273 kernel: pci_bus 2d10:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 29 12:01:34.257447 kernel: pci_bus 2d10:00: No busn resource found for root bus, will use [bus 00-ff] Jan 29 12:01:34.257608 kernel: pci 2d10:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 29 12:01:34.257795 kernel: pci 2d10:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 12:01:34.258046 kernel: pci 2d10:00:02.0: enabling Extended Tags Jan 29 12:01:34.258171 kernel: pci 2d10:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 2d10:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 29 12:01:34.258276 kernel: pci_bus 2d10:00: busn_res: [bus 00-ff] end is updated to 00 Jan 29 12:01:34.258370 kernel: pci 2d10:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 29 12:01:34.429611 kernel: mlx5_core 2d10:00:02.0: enabling device (0000 -> 0002) Jan 29 12:01:34.656609 kernel: mlx5_core 2d10:00:02.0: firmware version: 14.30.5000 Jan 29 12:01:34.656847 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: VF registering: eth1 Jan 29 12:01:34.657453 kernel: mlx5_core 2d10:00:02.0 eth1: joined to eth0 Jan 29 12:01:34.657675 kernel: mlx5_core 2d10:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 29 12:01:34.669003 kernel: mlx5_core 2d10:00:02.0 enP11536s1: renamed from eth1 Jan 29 12:01:34.669660 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 29 12:01:34.714070 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444) Jan 29 12:01:34.725888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 29 12:01:34.741008 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. 
Jan 29 12:01:34.776007 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (458) Jan 29 12:01:34.791216 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 29 12:01:34.794759 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 29 12:01:34.813282 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 12:01:35.840021 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 12:01:35.840565 disk-uuid[596]: The operation has completed successfully. Jan 29 12:01:35.916901 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 12:01:35.917038 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 12:01:35.943221 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 12:01:35.950647 sh[712]: Success Jan 29 12:01:35.982022 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 12:01:36.169337 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 12:01:36.181891 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 29 12:01:36.186560 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 12:01:36.202927 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 29 12:01:36.202992 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:36.206299 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 12:01:36.208972 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 12:01:36.211325 kernel: BTRFS info (device dm-0): using free space tree Jan 29 12:01:36.760454 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 12:01:36.764086 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 12:01:36.776233 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 12:01:36.782707 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 12:01:36.800353 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:36.800403 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:36.800417 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:36.824870 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:36.838837 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 12:01:36.841597 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:36.848965 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 12:01:36.860242 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 12:01:36.878477 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:36.888143 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jan 29 12:01:36.910636 systemd-networkd[896]: lo: Link UP Jan 29 12:01:36.910647 systemd-networkd[896]: lo: Gained carrier Jan 29 12:01:36.912871 systemd-networkd[896]: Enumeration completed Jan 29 12:01:36.912971 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:36.916515 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:36.916519 systemd-networkd[896]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:36.919777 systemd[1]: Reached target network.target - Network. Jan 29 12:01:36.977004 kernel: mlx5_core 2d10:00:02.0 enP11536s1: Link up Jan 29 12:01:37.017023 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: Data path switched to VF: enP11536s1 Jan 29 12:01:37.017902 systemd-networkd[896]: enP11536s1: Link UP Jan 29 12:01:37.018109 systemd-networkd[896]: eth0: Link UP Jan 29 12:01:37.018380 systemd-networkd[896]: eth0: Gained carrier Jan 29 12:01:37.018399 systemd-networkd[896]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:37.023269 systemd-networkd[896]: enP11536s1: Gained carrier Jan 29 12:01:37.056066 systemd-networkd[896]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:37.707916 ignition[869]: Ignition 2.19.0 Jan 29 12:01:37.707927 ignition[869]: Stage: fetch-offline Jan 29 12:01:37.707986 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.708000 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.708122 ignition[869]: parsed url from cmdline: "" Jan 29 12:01:37.708128 ignition[869]: no config URL provided Jan 29 12:01:37.708135 ignition[869]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:37.718364 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:37.708145 ignition[869]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:37.708152 ignition[869]: failed to fetch config: resource requires networking Jan 29 12:01:37.716345 ignition[869]: Ignition finished successfully Jan 29 12:01:37.734191 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 29 12:01:37.750962 ignition[904]: Ignition 2.19.0 Jan 29 12:01:37.750974 ignition[904]: Stage: fetch Jan 29 12:01:37.751203 ignition[904]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.751217 ignition[904]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.751318 ignition[904]: parsed url from cmdline: "" Jan 29 12:01:37.751321 ignition[904]: no config URL provided Jan 29 12:01:37.751326 ignition[904]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 12:01:37.751333 ignition[904]: no config at "/usr/lib/ignition/user.ign" Jan 29 12:01:37.751356 ignition[904]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 29 12:01:37.832094 ignition[904]: GET result: OK Jan 29 12:01:37.832237 ignition[904]: config has been read from IMDS userdata Jan 29 12:01:37.832280 ignition[904]: parsing config with SHA512: c81662c9d4eeb8a4d3e625485d0e62ad6dadfc15b57e47a4624f8785b18b1252136b50b31a9c28b2aeaee36f6119e50bec3e963d81abc1127c4b7442754745e9 Jan 29 12:01:37.839829 unknown[904]: fetched base config from "system" Jan 29 12:01:37.839850 unknown[904]: fetched base config from "system" Jan 29 12:01:37.840648 ignition[904]: fetch: fetch complete Jan 29 12:01:37.839861 unknown[904]: fetched user config from "azure" Jan 29 12:01:37.840656 ignition[904]: fetch: fetch passed Jan 29 12:01:37.842633 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 12:01:37.840717 ignition[904]: Ignition finished successfully Jan 29 12:01:37.861180 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 12:01:37.876732 ignition[910]: Ignition 2.19.0 Jan 29 12:01:37.876742 ignition[910]: Stage: kargs Jan 29 12:01:37.876974 ignition[910]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.877012 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.877858 ignition[910]: kargs: kargs passed Jan 29 12:01:37.877901 ignition[910]: Ignition finished successfully Jan 29 12:01:37.888213 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 12:01:37.897149 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 12:01:37.913166 ignition[916]: Ignition 2.19.0 Jan 29 12:01:37.913177 ignition[916]: Stage: disks Jan 29 12:01:37.915684 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 12:01:37.913420 ignition[916]: no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:37.919507 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:37.913430 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:37.924322 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 12:01:37.914281 ignition[916]: disks: disks passed Jan 29 12:01:37.927370 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:37.914319 ignition[916]: Ignition finished successfully Jan 29 12:01:37.934399 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:37.950158 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:37.958151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 12:01:38.013499 systemd-fsck[924]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 29 12:01:38.019001 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. 
Jan 29 12:01:38.031630 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 12:01:38.123001 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 29 12:01:38.123404 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 12:01:38.125476 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:38.159159 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:38.164577 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 12:01:38.168950 systemd-networkd[896]: enP11536s1: Gained IPv6LL Jan 29 12:01:38.176010 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (935) Jan 29 12:01:38.177172 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 12:01:38.184297 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:38.185526 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 12:01:38.197594 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:38.197621 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:38.197634 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:38.192762 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:38.203521 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:38.207823 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 12:01:38.217151 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 12:01:38.843762 coreos-metadata[937]: Jan 29 12:01:38.843 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 12:01:38.848591 coreos-metadata[937]: Jan 29 12:01:38.846 INFO Fetch successful Jan 29 12:01:38.848591 coreos-metadata[937]: Jan 29 12:01:38.846 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 29 12:01:38.859870 coreos-metadata[937]: Jan 29 12:01:38.859 INFO Fetch successful Jan 29 12:01:38.872186 systemd-networkd[896]: eth0: Gained IPv6LL Jan 29 12:01:38.876123 coreos-metadata[937]: Jan 29 12:01:38.876 INFO wrote hostname ci-4081.3.0-a-76e05e3785 to /sysroot/etc/hostname Jan 29 12:01:38.878057 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:38.946941 initrd-setup-root[965]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 12:01:38.978902 initrd-setup-root[972]: cut: /sysroot/etc/group: No such file or directory Jan 29 12:01:39.013369 initrd-setup-root[979]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 12:01:39.036015 initrd-setup-root[986]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 12:01:39.939177 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:39.951090 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 12:01:39.960197 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 12:01:39.966145 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 12:01:39.973516 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:39.992783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jan 29 12:01:40.005556 ignition[1054]: INFO : Ignition 2.19.0 Jan 29 12:01:40.005556 ignition[1054]: INFO : Stage: mount Jan 29 12:01:40.012305 ignition[1054]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:40.012305 ignition[1054]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:40.012305 ignition[1054]: INFO : mount: mount passed Jan 29 12:01:40.012305 ignition[1054]: INFO : Ignition finished successfully Jan 29 12:01:40.007670 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 12:01:40.023178 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 12:01:40.041172 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 12:01:40.056537 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1065) Jan 29 12:01:40.056610 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 29 12:01:40.057996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 12:01:40.062564 kernel: BTRFS info (device sda6): using free space tree Jan 29 12:01:40.070131 kernel: BTRFS info (device sda6): auto enabling async discard Jan 29 12:01:40.072123 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 12:01:40.098245 ignition[1081]: INFO : Ignition 2.19.0 Jan 29 12:01:40.098245 ignition[1081]: INFO : Stage: files Jan 29 12:01:40.102631 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:40.102631 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:40.102631 ignition[1081]: DEBUG : files: compiled without relabeling support, skipping Jan 29 12:01:40.125241 ignition[1081]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 12:01:40.125241 ignition[1081]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 12:01:40.212184 ignition[1081]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 12:01:40.217193 ignition[1081]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 12:01:40.217193 ignition[1081]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 12:01:40.212675 unknown[1081]: wrote ssh authorized keys file for user: core Jan 29 12:01:40.258256 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:40.263482 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 29 12:01:40.308498 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 12:01:40.430242 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing 
file "/sysroot/home/core/nginx.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:40.436084 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 29 12:01:40.944157 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 12:01:41.302176 ignition[1081]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 29 12:01:41.302176 ignition[1081]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 29 12:01:41.317457 ignition[1081]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:41.322751 ignition[1081]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 12:01:41.322751 ignition[1081]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 29 12:01:41.330767 ignition[1081]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:41.334667 ignition[1081]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 12:01:41.339105 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:41.343416 ignition[1081]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 12:01:41.348296 ignition[1081]: INFO : files: files passed Jan 29 12:01:41.348296 ignition[1081]: INFO : Ignition finished successfully Jan 29 12:01:41.354532 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 29 12:01:41.361192 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jan 29 12:01:41.367218 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 12:01:41.383223 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 12:01:41.383362 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 12:01:41.410501 initrd-setup-root-after-ignition[1110]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.410501 initrd-setup-root-after-ignition[1110]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.419058 initrd-setup-root-after-ignition[1114]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 12:01:41.423942 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:41.425300 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 12:01:41.437254 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 12:01:41.463336 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 12:01:41.463455 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 12:01:41.473188 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 12:01:41.475813 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 12:01:41.481070 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 12:01:41.493218 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 12:01:41.507495 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:41.517182 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 12:01:41.529413 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:41.535302 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 12:01:41.538521 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 12:01:41.543637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 12:01:41.543777 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 12:01:41.549315 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 12:01:41.553543 systemd[1]: Stopped target basic.target - Basic System. Jan 29 12:01:41.560955 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 12:01:41.565779 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 12:01:41.571170 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 12:01:41.576693 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 12:01:41.581886 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 12:01:41.587503 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 12:01:41.592864 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 12:01:41.599894 systemd[1]: Stopped target swap.target - Swaps. Jan 29 12:01:41.604054 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 12:01:41.604209 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 12:01:41.610571 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Jan 29 12:01:41.610968 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:41.611349 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 12:01:41.618025 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:41.623013 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 12:01:41.623163 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 12:01:41.628951 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 12:01:41.629119 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 12:01:41.633959 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 12:01:41.634120 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 12:01:41.639709 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 12:01:41.639843 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 12:01:41.658609 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 12:01:41.675154 ignition[1134]: INFO : Ignition 2.19.0 Jan 29 12:01:41.678098 ignition[1134]: INFO : Stage: umount Jan 29 12:01:41.678098 ignition[1134]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 12:01:41.678098 ignition[1134]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 29 12:01:41.678098 ignition[1134]: INFO : umount: umount passed Jan 29 12:01:41.678098 ignition[1134]: INFO : Ignition finished successfully Jan 29 12:01:41.677887 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 12:01:41.682059 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 12:01:41.682235 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:41.690631 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 12:01:41.690771 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 12:01:41.704327 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 12:01:41.707294 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 12:01:41.720376 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 12:01:41.720484 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 12:01:41.726448 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 12:01:41.726547 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 12:01:41.731653 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 12:01:41.731703 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 12:01:41.732564 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 12:01:41.732600 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 12:01:41.733369 systemd[1]: Stopped target network.target - Network. Jan 29 12:01:41.740613 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 12:01:41.740671 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 12:01:41.743631 systemd[1]: Stopped target paths.target - Path Units. Jan 29 12:01:41.746162 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 12:01:41.746212 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:41.751722 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 12:01:41.756236 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 12:01:41.760576 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 12:01:41.763022 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 12:01:41.767727 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 12:01:41.767778 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 12:01:41.772426 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 12:01:41.772497 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 12:01:41.777839 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 12:01:41.780668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 12:01:41.788190 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 12:01:41.794340 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 12:01:41.799925 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 12:01:41.802044 systemd-networkd[896]: eth0: DHCPv6 lease lost Jan 29 12:01:41.804581 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 12:01:41.804687 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 12:01:41.808881 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 12:01:41.808998 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 12:01:41.816801 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 12:01:41.816868 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:41.832410 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 12:01:41.839238 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 12:01:41.839328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 12:01:41.842634 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 12:01:41.842679 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:41.847357 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 12:01:41.847402 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:41.852348 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 12:01:41.852394 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:41.863877 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:41.892586 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 12:01:41.892736 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:41.901920 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 12:01:41.902017 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:41.912250 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 12:01:41.914627 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:41.924380 systemd[1]: dracut-pre-udev.service: Deactivated successfully. 
Jan 29 12:01:41.924454 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 12:01:41.929931 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 12:01:41.930030 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 12:01:41.934512 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 12:01:41.934563 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 12:01:41.953008 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: Data path switched from VF: enP11536s1 Jan 29 12:01:41.957213 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 12:01:41.960059 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 12:01:41.960131 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:41.963442 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 12:01:41.963499 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:41.969835 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 12:01:41.969891 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:41.972967 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:41.973354 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:41.976996 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 12:01:41.977523 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 12:01:41.990564 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 12:01:41.990688 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 12:01:42.223448 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 12:01:42.223612 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 12:01:42.228406 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 12:01:42.233349 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 12:01:42.233435 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 12:01:42.247234 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 12:01:42.297662 systemd[1]: Switching root. Jan 29 12:01:42.369705 systemd-journald[176]: Journal stopped Jan 29 12:01:47.148613 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Jan 29 12:01:47.148655 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 12:01:47.148673 kernel: SELinux: policy capability open_perms=1 Jan 29 12:01:47.148688 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 12:01:47.148702 kernel: SELinux: policy capability always_check_network=0 Jan 29 12:01:47.148716 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 12:01:47.148732 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 12:01:47.148750 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 12:01:47.148766 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 12:01:47.148783 kernel: audit: type=1403 audit(1738152103.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 12:01:47.148800 systemd[1]: Successfully loaded SELinux policy in 129.061ms. 
Jan 29 12:01:47.148817 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.272ms. Jan 29 12:01:47.148835 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 12:01:47.148851 systemd[1]: Detected virtualization microsoft. Jan 29 12:01:47.148872 systemd[1]: Detected architecture x86-64. Jan 29 12:01:47.148889 systemd[1]: Detected first boot. Jan 29 12:01:47.148906 systemd[1]: Hostname set to . Jan 29 12:01:47.148924 systemd[1]: Initializing machine ID from random generator. Jan 29 12:01:47.148940 zram_generator::config[1176]: No configuration found. Jan 29 12:01:47.148961 systemd[1]: Populated /etc with preset unit settings. Jan 29 12:01:47.149040 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 12:01:47.149055 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 12:01:47.149067 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 12:01:47.149081 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 12:01:47.149090 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 12:01:47.149104 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 12:01:47.149117 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 12:01:47.149130 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 12:01:47.149142 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 12:01:47.149153 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 12:01:47.149163 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 12:01:47.149177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 12:01:47.149188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 12:01:47.149202 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 12:01:47.149219 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 12:01:47.149233 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 12:01:47.149249 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 12:01:47.149262 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 12:01:47.149276 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 12:01:47.149292 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 12:01:47.149317 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 12:01:47.149335 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 12:01:47.149356 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 12:01:47.149371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Jan 29 12:01:47.149387 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 12:01:47.149404 systemd[1]: Reached target slices.target - Slice Units. Jan 29 12:01:47.149421 systemd[1]: Reached target swap.target - Swaps. Jan 29 12:01:47.149439 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 12:01:47.149455 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 12:01:47.149475 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 12:01:47.149490 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 12:01:47.149505 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 12:01:47.149521 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 12:01:47.149537 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 12:01:47.149557 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 12:01:47.149573 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 12:01:47.149584 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:47.149598 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 12:01:47.149611 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 12:01:47.149622 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 12:01:47.149635 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 12:01:47.149649 systemd[1]: Reached target machines.target - Containers. Jan 29 12:01:47.149663 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 12:01:47.149677 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:47.149687 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 12:01:47.149700 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 12:01:47.149710 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:47.149723 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:47.149734 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:47.149747 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 12:01:47.149757 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:47.149772 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 12:01:47.149782 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 12:01:47.149795 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 12:01:47.149805 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 12:01:47.149818 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 12:01:47.149828 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 12:01:47.149838 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jan 29 12:01:47.149848 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 12:01:47.149861 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 12:01:47.149871 kernel: fuse: init (API version 7.39) Jan 29 12:01:47.149880 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 12:01:47.149891 kernel: loop: module loaded Jan 29 12:01:47.149900 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 12:01:47.149910 systemd[1]: Stopped verity-setup.service. Jan 29 12:01:47.149920 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:47.149933 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 12:01:47.149943 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 12:01:47.149957 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 12:01:47.149969 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 12:01:47.149995 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 12:01:47.150006 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 12:01:47.150041 systemd-journald[1282]: Collecting audit messages is disabled. Jan 29 12:01:47.150070 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 12:01:47.150081 systemd-journald[1282]: Journal started Jan 29 12:01:47.150106 systemd-journald[1282]: Runtime Journal (/run/log/journal/7aa98f25f3ea47198c350445332c3469) is 8.0M, max 158.8M, 150.8M free. Jan 29 12:01:46.472822 systemd[1]: Queued start job for default target multi-user.target. Jan 29 12:01:46.561161 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 12:01:46.561545 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 12:01:47.160011 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 12:01:47.167450 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 12:01:47.171340 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 12:01:47.171500 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 12:01:47.174944 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:47.176167 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:47.179478 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:47.179949 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:47.184332 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 12:01:47.184504 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 12:01:47.187703 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:47.187863 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:47.192504 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 12:01:47.195685 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 12:01:47.199839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 29 12:01:47.204723 kernel: ACPI: bus type drm_connector registered Jan 29 12:01:47.206033 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 12:01:47.206327 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:47.227446 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 12:01:47.239085 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 12:01:47.255094 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 12:01:47.259898 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 12:01:47.260081 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 12:01:47.264617 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 12:01:47.274179 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 12:01:47.278772 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 12:01:47.281774 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:47.300217 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 12:01:47.305630 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 12:01:47.308612 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:47.312835 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 12:01:47.316353 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:47.317578 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 12:01:47.325162 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 12:01:47.335494 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 12:01:47.342440 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 12:01:47.346264 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 12:01:47.349610 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 12:01:47.353146 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 12:01:47.356771 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 12:01:47.364963 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 12:01:47.374152 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 12:01:47.378200 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 12:01:47.398631 udevadm[1322]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 12:01:47.408799 systemd-journald[1282]: Time spent on flushing to /var/log/journal/7aa98f25f3ea47198c350445332c3469 is 24.663ms for 962 entries. 
Jan 29 12:01:47.408799 systemd-journald[1282]: System Journal (/var/log/journal/7aa98f25f3ea47198c350445332c3469) is 8.0M, max 2.6G, 2.6G free. Jan 29 12:01:47.552740 systemd-journald[1282]: Received client request to flush runtime journal. Jan 29 12:01:47.552798 kernel: loop0: detected capacity change from 0 to 210664 Jan 29 12:01:47.552824 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 12:01:47.494181 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 12:01:47.534563 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 12:01:47.536343 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 12:01:47.554042 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 12:01:47.559898 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jan 29 12:01:47.559923 systemd-tmpfiles[1313]: ACLs are not supported, ignoring. Jan 29 12:01:47.566925 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 12:01:47.576906 kernel: loop1: detected capacity change from 0 to 142488 Jan 29 12:01:47.581177 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 12:01:47.849590 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 12:01:47.857223 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 12:01:47.882434 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 29 12:01:47.882458 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Jan 29 12:01:47.887186 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 12:01:47.966131 kernel: loop2: detected capacity change from 0 to 31056 Jan 29 12:01:48.322038 kernel: loop3: detected capacity change from 0 to 140768 Jan 29 12:01:48.758274 kernel: loop4: detected capacity change from 0 to 210664 Jan 29 12:01:48.768427 kernel: loop5: detected capacity change from 0 to 142488 Jan 29 12:01:48.772877 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 12:01:48.781005 kernel: loop6: detected capacity change from 0 to 31056 Jan 29 12:01:48.785256 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 12:01:48.788998 kernel: loop7: detected capacity change from 0 to 140768 Jan 29 12:01:48.798101 (sd-merge)[1340]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 29 12:01:48.799155 (sd-merge)[1340]: Merged extensions into '/usr'. Jan 29 12:01:48.804250 systemd[1]: Reloading requested from client PID 1312 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 12:01:48.804374 systemd[1]: Reloading... Jan 29 12:01:48.828402 systemd-udevd[1342]: Using default interface naming scheme 'v255'. Jan 29 12:01:48.871031 zram_generator::config[1367]: No configuration found. Jan 29 12:01:49.013266 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:49.074083 systemd[1]: Reloading finished in 269 ms. Jan 29 12:01:49.103831 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 12:01:49.114230 systemd[1]: Starting ensure-sysext.service... 
Jan 29 12:01:49.120202 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 12:01:49.156113 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 12:01:49.156585 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 12:01:49.157507 systemd-tmpfiles[1427]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 12:01:49.157817 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Jan 29 12:01:49.157968 systemd-tmpfiles[1427]: ACLs are not supported, ignoring. Jan 29 12:01:49.168560 systemd[1]: Reloading requested from client PID 1426 ('systemctl') (unit ensure-sysext.service)... Jan 29 12:01:49.168583 systemd[1]: Reloading... Jan 29 12:01:49.178272 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:49.178288 systemd-tmpfiles[1427]: Skipping /boot Jan 29 12:01:49.188702 systemd-tmpfiles[1427]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 12:01:49.188723 systemd-tmpfiles[1427]: Skipping /boot Jan 29 12:01:49.243156 zram_generator::config[1452]: No configuration found. Jan 29 12:01:49.385146 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:49.451220 systemd[1]: Reloading finished in 282 ms. Jan 29 12:01:49.473596 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 12:01:49.485172 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:01:49.496214 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 12:01:49.500829 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 12:01:49.508215 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 12:01:49.520206 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 12:01:49.529302 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.529592 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:49.535586 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:49.541276 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:49.550090 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:49.553275 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:49.553467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.554718 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:49.554918 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:49.559384 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:49.559583 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Jan 29 12:01:49.563914 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:49.564331 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:49.579860 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.580167 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:49.585935 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:49.597136 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:49.609969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:49.614296 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:49.617362 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 12:01:49.620377 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.622111 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 12:01:49.629444 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:49.629709 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:49.637299 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:49.637492 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:49.642767 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:49.643211 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:49.661363 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Jan 29 12:01:49.664564 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.664969 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 12:01:49.672263 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 12:01:49.685444 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 12:01:49.693634 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 12:01:49.706199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 12:01:49.711115 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 12:01:49.711444 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 12:01:49.714504 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 12:01:49.715726 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 12:01:49.719641 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 12:01:49.723614 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 12:01:49.723870 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 12:01:49.728780 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 29 12:01:49.728973 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 12:01:49.733360 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 12:01:49.733557 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 12:01:49.737396 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 12:01:49.737583 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 12:01:49.745648 systemd[1]: Finished ensure-sysext.service. Jan 29 12:01:49.761489 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 12:01:49.761581 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 12:01:49.814089 augenrules[1565]: No rules Jan 29 12:01:49.816129 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:01:49.828390 systemd-resolved[1519]: Positive Trust Anchors: Jan 29 12:01:49.828408 systemd-resolved[1519]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 12:01:49.828452 systemd-resolved[1519]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 12:01:49.870372 systemd-resolved[1519]: Using system hostname 'ci-4081.3.0-a-76e05e3785'. Jan 29 12:01:49.872501 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 12:01:49.875690 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 12:01:50.958061 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 12:01:50.964036 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 12:01:51.555640 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 12:01:51.571248 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 12:01:51.644663 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 12:01:51.757011 kernel: hv_vmbus: registering driver hv_balloon Jan 29 12:01:51.763875 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 12:01:51.763998 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jan 29 12:01:51.765668 systemd-networkd[1579]: lo: Link UP Jan 29 12:01:51.766221 systemd-networkd[1579]: lo: Gained carrier Jan 29 12:01:51.771842 systemd-networkd[1579]: Enumeration completed Jan 29 12:01:51.772076 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 12:01:51.777696 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jan 29 12:01:51.777710 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:51.779397 systemd[1]: Reached target network.target - Network. Jan 29 12:01:51.788721 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 12:01:51.796322 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Jan 29 12:01:51.822774 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:51.832322 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:51.832535 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:51.846955 kernel: mlx5_core 2d10:00:02.0 enP11536s1: Link up Jan 29 12:01:51.849187 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:51.863245 kernel: hv_vmbus: registering driver hyperv_fb Jan 29 12:01:51.867947 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jan 29 12:01:51.868024 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jan 29 12:01:51.874973 kernel: Console: switching to colour dummy device 80x25 Jan 29 12:01:51.880021 kernel: hv_netvsc 000d3a68-b45f-000d-3a68-b45f000d3a68 eth0: Data path switched to VF: enP11536s1 Jan 29 12:01:51.885992 systemd-networkd[1579]: enP11536s1: Link UP Jan 29 12:01:51.888020 systemd-networkd[1579]: eth0: Link UP Jan 29 12:01:51.888031 systemd-networkd[1579]: eth0: Gained carrier Jan 29 12:01:51.888052 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:51.895453 systemd-networkd[1579]: enP11536s1: Gained carrier Jan 29 12:01:51.903002 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 12:01:51.923228 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1580) Jan 29 12:01:51.941993 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 12:01:51.942736 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:51.959120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 12:01:52.141761 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 29 12:01:52.204388 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 12:01:52.234950 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 12:01:52.317007 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Jan 29 12:01:52.351174 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 12:01:52.358192 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 12:01:52.438761 lvm[1663]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 12:01:52.469100 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 12:01:52.474387 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 12:01:52.481153 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 12:01:52.487125 lvm[1666]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Jan 29 12:01:52.503248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 12:01:52.512765 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 12:01:53.070089 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:53.208240 systemd-networkd[1579]: enP11536s1: Gained IPv6LL Jan 29 12:01:53.336338 systemd-networkd[1579]: eth0: Gained IPv6LL Jan 29 12:01:53.339409 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 12:01:53.343907 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 12:01:53.801944 ldconfig[1307]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 12:01:53.819998 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 12:01:53.833192 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 12:01:53.858549 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 12:01:53.862223 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 12:01:53.865087 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 12:01:53.868103 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 12:01:53.871463 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 12:01:53.874627 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 12:01:53.878655 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 12:01:53.881921 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 12:01:53.881965 systemd[1]: Reached target paths.target - Path Units. Jan 29 12:01:53.884311 systemd[1]: Reached target timers.target - Timer Units. Jan 29 12:01:53.887278 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 12:01:53.891458 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 12:01:53.921968 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 12:01:53.926856 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 12:01:53.930457 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 12:01:53.933395 systemd[1]: Reached target basic.target - Basic System. Jan 29 12:01:53.936070 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:53.936106 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 12:01:53.954228 systemd[1]: Starting chronyd.service - NTP client/server... Jan 29 12:01:53.961167 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 12:01:53.969228 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 12:01:53.976294 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 12:01:53.986166 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 12:01:53.993203 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 29 12:01:53.995918 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 12:01:53.995997 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 29 12:01:53.998287 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 29 12:01:54.003174 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 29 12:01:54.006125 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:01:54.012133 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 12:01:54.022226 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 12:01:54.029162 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 12:01:54.043233 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 12:01:54.044630 (chronyd)[1676]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 29 12:01:54.049458 jq[1680]: false Jan 29 12:01:54.057220 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 12:01:54.067227 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 12:01:54.072365 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 12:01:54.073972 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 12:01:54.075514 chronyd[1696]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 29 12:01:54.076517 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 12:01:54.085148 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 12:01:54.085436 KVP[1684]: KVP starting; pid is:1684 Jan 29 12:01:54.091069 chronyd[1696]: Timezone right/UTC failed leap second check, ignoring Jan 29 12:01:54.091332 chronyd[1696]: Loaded seccomp filter (level 2) Jan 29 12:01:54.094553 systemd[1]: Started chronyd.service - NTP client/server. Jan 29 12:01:54.104110 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 12:01:54.105107 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. 
Jan 29 12:01:54.118112 kernel: hv_utils: KVP IC version 4.0 Jan 29 12:01:54.117946 KVP[1684]: KVP LIC Version: 3.1 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found loop4 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found loop5 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found loop6 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found loop7 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda1 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda2 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda3 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found usr Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda4 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda6 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda7 Jan 29 12:01:54.125267 extend-filesystems[1681]: Found sda9 Jan 29 12:01:54.125267 extend-filesystems[1681]: Checking size of /dev/sda9 Jan 29 12:01:54.131586 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 12:01:54.210119 jq[1698]: true Jan 29 12:01:54.131840 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 12:01:54.164436 (ntainerd)[1712]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 12:01:54.190049 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 12:01:54.190315 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 12:01:54.226992 jq[1714]: true Jan 29 12:01:54.197054 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 12:01:54.238613 extend-filesystems[1681]: Old size kept for /dev/sda9 Jan 29 12:01:54.244265 extend-filesystems[1681]: Found sr0 Jan 29 12:01:54.260532 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 12:01:54.260803 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 12:01:54.270240 dbus-daemon[1679]: [system] SELinux support is enabled Jan 29 12:01:54.270466 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 12:01:54.279263 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 12:01:54.279308 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 12:01:54.284299 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 12:01:54.284333 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 12:01:54.289044 tar[1708]: linux-amd64/helm Jan 29 12:01:54.303064 update_engine[1697]: I20250129 12:01:54.302210 1697 main.cc:92] Flatcar Update Engine starting Jan 29 12:01:54.314463 systemd[1]: Started update-engine.service - Update Engine. Jan 29 12:01:54.325309 update_engine[1697]: I20250129 12:01:54.325235 1697 update_check_scheduler.cc:74] Next update check in 10m33s Jan 29 12:01:54.328218 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 29 12:01:54.407833 bash[1752]: Updated "/home/core/.ssh/authorized_keys" Jan 29 12:01:54.415618 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1585) Jan 29 12:01:54.411415 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 12:01:54.417846 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 12:01:54.439105 systemd-logind[1693]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 12:01:54.443035 systemd-logind[1693]: New seat seat0. Jan 29 12:01:54.456923 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 12:01:54.463832 coreos-metadata[1678]: Jan 29 12:01:54.456 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 29 12:01:54.470496 coreos-metadata[1678]: Jan 29 12:01:54.470 INFO Fetch successful Jan 29 12:01:54.470496 coreos-metadata[1678]: Jan 29 12:01:54.470 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 29 12:01:54.477104 coreos-metadata[1678]: Jan 29 12:01:54.477 INFO Fetch successful Jan 29 12:01:54.477104 coreos-metadata[1678]: Jan 29 12:01:54.477 INFO Fetching http://168.63.129.16/machine/bb0c450f-d212-4108-80f6-19b9e6876b02/01b37d91%2D0405%2D48cf%2D84b8%2Dd6e84aca3061.%5Fci%2D4081.3.0%2Da%2D76e05e3785?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 29 12:01:54.485185 coreos-metadata[1678]: Jan 29 12:01:54.484 INFO Fetch successful Jan 29 12:01:54.485185 coreos-metadata[1678]: Jan 29 12:01:54.485 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 29 12:01:54.503009 coreos-metadata[1678]: Jan 29 12:01:54.501 INFO Fetch successful Jan 29 12:01:54.557315 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 12:01:54.561268 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 12:01:54.780148 locksmithd[1749]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 12:01:55.433324 sshd_keygen[1722]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 12:01:55.478448 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 12:01:55.484008 tar[1708]: linux-amd64/LICENSE Jan 29 12:01:55.488329 tar[1708]: linux-amd64/README.md Jan 29 12:01:55.489545 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 12:01:55.496231 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 29 12:01:55.505373 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 12:01:55.505852 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 12:01:55.517390 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 12:01:55.522638 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 12:01:55.542476 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 12:01:55.550216 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 29 12:01:55.558234 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 12:01:55.566644 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 12:01:55.570295 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 12:01:55.752157 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 12:01:55.764104 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:01:55.782249 containerd[1712]: time="2025-01-29T12:01:55.782161400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 29 12:01:55.816716 containerd[1712]: time="2025-01-29T12:01:55.816172500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.817951 containerd[1712]: time="2025-01-29T12:01:55.817890800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:55.817951 containerd[1712]: time="2025-01-29T12:01:55.817922700Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 12:01:55.818392 containerd[1712]: time="2025-01-29T12:01:55.817941600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 12:01:55.818568 containerd[1712]: time="2025-01-29T12:01:55.818544700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 12:01:55.818621 containerd[1712]: time="2025-01-29T12:01:55.818570000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818680 containerd[1712]: time="2025-01-29T12:01:55.818651100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818680 containerd[1712]: time="2025-01-29T12:01:55.818667900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818879 containerd[1712]: time="2025-01-29T12:01:55.818853600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818879 containerd[1712]: time="2025-01-29T12:01:55.818875600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818967 containerd[1712]: time="2025-01-29T12:01:55.818895500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:55.818967 containerd[1712]: time="2025-01-29T12:01:55.818909200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.820324 containerd[1712]: time="2025-01-29T12:01:55.820123600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 12:01:55.820450 containerd[1712]: time="2025-01-29T12:01:55.820424900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Jan 29 12:01:55.820607 containerd[1712]: time="2025-01-29T12:01:55.820583700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 12:01:55.820669 containerd[1712]: time="2025-01-29T12:01:55.820607000Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 12:01:55.820974 containerd[1712]: time="2025-01-29T12:01:55.820732300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 12:01:55.820974 containerd[1712]: time="2025-01-29T12:01:55.820796200Z" level=info msg="metadata content store policy set" policy=shared Jan 29 12:01:55.840360 containerd[1712]: time="2025-01-29T12:01:55.840328400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 12:01:55.840440 containerd[1712]: time="2025-01-29T12:01:55.840384900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 12:01:55.840440 containerd[1712]: time="2025-01-29T12:01:55.840407800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 12:01:55.840440 containerd[1712]: time="2025-01-29T12:01:55.840428200Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 12:01:55.840548 containerd[1712]: time="2025-01-29T12:01:55.840449000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 12:01:55.840668 containerd[1712]: time="2025-01-29T12:01:55.840596800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841114400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841258100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841280600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841298800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841320000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841339300Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841356400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841375500Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841395000Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841412700Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841439300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841458100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841485400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.841705 containerd[1712]: time="2025-01-29T12:01:55.841503900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841520400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841538400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841556100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841574600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841590600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841608600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841626700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841646100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841661600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841677700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841694200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841715100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841742100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841757100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842337 containerd[1712]: time="2025-01-29T12:01:55.841771700Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.841842900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.841866400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.841960500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.842000000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.842016300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.842035500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.842050000Z" level=info msg="NRI interface is disabled by configuration." Jan 29 12:01:55.842811 containerd[1712]: time="2025-01-29T12:01:55.842064900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 29 12:01:55.843114 containerd[1712]: time="2025-01-29T12:01:55.842428200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 12:01:55.843114 containerd[1712]: time="2025-01-29T12:01:55.842507200Z" level=info msg="Connect containerd service" Jan 29 12:01:55.843114 containerd[1712]: time="2025-01-29T12:01:55.842551200Z" level=info msg="using legacy CRI server" Jan 29 12:01:55.843114 containerd[1712]: time="2025-01-29T12:01:55.842560300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 12:01:55.843114 containerd[1712]: time="2025-01-29T12:01:55.842700800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843531000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:01:55.845549 
containerd[1712]: time="2025-01-29T12:01:55.843674200Z" level=info msg="Start subscribing containerd event" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843740500Z" level=info msg="Start recovering state" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843809400Z" level=info msg="Start event monitor" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843826800Z" level=info msg="Start snapshots syncer" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843841000Z" level=info msg="Start cni network conf syncer for default" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.843850400Z" level=info msg="Start streaming server" Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.844345200Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 12:01:55.845549 containerd[1712]: time="2025-01-29T12:01:55.844398100Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 12:01:55.844602 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 12:01:55.849004 containerd[1712]: time="2025-01-29T12:01:55.847490300Z" level=info msg="containerd successfully booted in 0.066260s" Jan 29 12:01:55.849185 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 12:01:55.853950 systemd[1]: Startup finished in 970ms (firmware) + 26.986s (loader) + 1.072s (kernel) + 11.273s (initrd) + 12.116s (userspace) = 52.419s. Jan 29 12:01:56.247424 login[1818]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:01:56.249698 login[1819]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 12:01:56.262242 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 12:01:56.270323 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 12:01:56.274054 systemd-logind[1693]: New session 1 of user core. Jan 29 12:01:56.279349 systemd-logind[1693]: New session 2 of user core. Jan 29 12:01:56.293338 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 12:01:56.300493 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 12:01:56.309411 (systemd)[1840]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 12:01:56.443941 kubelet[1824]: E0129 12:01:56.443880 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:01:56.447288 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:01:56.447483 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:01:56.448365 systemd[1]: kubelet.service: Consumed 1.010s CPU time. Jan 29 12:01:56.513607 systemd[1840]: Queued start job for default target default.target. Jan 29 12:01:56.524091 systemd[1840]: Created slice app.slice - User Application Slice. Jan 29 12:01:56.524129 systemd[1840]: Reached target paths.target - Paths. Jan 29 12:01:56.524147 systemd[1840]: Reached target timers.target - Timers. Jan 29 12:01:56.525419 systemd[1840]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 29 12:01:56.537087 systemd[1840]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 12:01:56.537263 systemd[1840]: Reached target sockets.target - Sockets. Jan 29 12:01:56.537290 systemd[1840]: Reached target basic.target - Basic System. Jan 29 12:01:56.537337 systemd[1840]: Reached target default.target - Main User Target. Jan 29 12:01:56.537375 systemd[1840]: Startup finished in 219ms. Jan 29 12:01:56.537844 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 12:01:56.544161 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 12:01:56.545133 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 12:01:57.281863 waagent[1817]: 2025-01-29T12:01:57.281750Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 29 12:01:57.285496 waagent[1817]: 2025-01-29T12:01:57.285300Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 29 12:01:57.287966 waagent[1817]: 2025-01-29T12:01:57.287907Z INFO Daemon Daemon Python: 3.11.9 Jan 29 12:01:57.290436 waagent[1817]: 2025-01-29T12:01:57.290377Z INFO Daemon Daemon Run daemon Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.291883Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.293473Z INFO Daemon Daemon Using waagent for provisioning Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.294476Z INFO Daemon Daemon Activate resource disk Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.295262Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.300358Z INFO Daemon Daemon Found device: None Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.300693Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.301445Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.303793Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.304514Z INFO Daemon Daemon Running default provisioning handler Jan 29 12:01:57.325431 waagent[1817]: 2025-01-29T12:01:57.313174Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 29 12:01:57.325836 waagent[1817]: 2025-01-29T12:01:57.325694Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 29 12:01:57.333796 waagent[1817]: 2025-01-29T12:01:57.326867Z INFO Daemon Daemon cloud-init is enabled: False Jan 29 12:01:57.333796 waagent[1817]: 2025-01-29T12:01:57.327670Z INFO Daemon Daemon Copying ovf-env.xml Jan 29 12:01:57.425425 waagent[1817]: 2025-01-29T12:01:57.422249Z INFO Daemon Daemon Successfully mounted dvd Jan 29 12:01:57.440030 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 29 12:01:57.442888 waagent[1817]: 2025-01-29T12:01:57.442803Z INFO Daemon Daemon Detect protocol endpoint Jan 29 12:01:57.445682 waagent[1817]: 2025-01-29T12:01:57.445620Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 29 12:01:57.448491 waagent[1817]: 2025-01-29T12:01:57.448436Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 29 12:01:57.451784 waagent[1817]: 2025-01-29T12:01:57.451733Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 29 12:01:57.458370 waagent[1817]: 2025-01-29T12:01:57.452895Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 29 12:01:57.458370 waagent[1817]: 2025-01-29T12:01:57.453568Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 29 12:01:57.482267 waagent[1817]: 2025-01-29T12:01:57.482201Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 29 12:01:57.490301 waagent[1817]: 2025-01-29T12:01:57.483682Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 29 12:01:57.490301 waagent[1817]: 2025-01-29T12:01:57.484528Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 29 12:01:57.591451 waagent[1817]: 2025-01-29T12:01:57.591338Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 29 12:01:57.595315 waagent[1817]: 2025-01-29T12:01:57.595235Z INFO Daemon Daemon Forcing an update of the goal state. Jan 29 12:01:57.602265 waagent[1817]: 2025-01-29T12:01:57.602204Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 12:01:57.620418 waagent[1817]: 2025-01-29T12:01:57.620353Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 29 12:01:57.628143 waagent[1817]: 2025-01-29T12:01:57.621949Z INFO Daemon Jan 29 12:01:57.628143 waagent[1817]: 2025-01-29T12:01:57.623473Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: e8426068-d91a-4107-a3a7-3c25a7db45ee eTag: 18163082600334825723 source: Fabric] Jan 29 12:01:57.628143 waagent[1817]: 2025-01-29T12:01:57.624787Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 29 12:01:57.628143 waagent[1817]: 2025-01-29T12:01:57.626002Z INFO Daemon Jan 29 12:01:57.628143 waagent[1817]: 2025-01-29T12:01:57.626705Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 29 12:01:57.633004 waagent[1817]: 2025-01-29T12:01:57.631853Z INFO Daemon Daemon Downloading artifacts profile blob Jan 29 12:01:57.720423 waagent[1817]: 2025-01-29T12:01:57.720320Z INFO Daemon Downloaded certificate {'thumbprint': '15E75E62EC7DA85CE577C07E7AF0EA360784AC24', 'hasPrivateKey': False} Jan 29 12:01:57.730071 waagent[1817]: 2025-01-29T12:01:57.721950Z INFO Daemon Downloaded certificate {'thumbprint': '0B8D4D430BC53CA4B77D1FFF4491EE8F60732380', 'hasPrivateKey': True} Jan 29 12:01:57.730071 waagent[1817]: 2025-01-29T12:01:57.723265Z INFO Daemon Fetch goal state completed Jan 29 12:01:57.737266 waagent[1817]: 2025-01-29T12:01:57.737195Z INFO Daemon Daemon Starting provisioning Jan 29 12:01:57.744461 waagent[1817]: 2025-01-29T12:01:57.739011Z INFO Daemon Daemon Handle ovf-env.xml. 
Jan 29 12:01:57.744461 waagent[1817]: 2025-01-29T12:01:57.739788Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-76e05e3785] Jan 29 12:01:57.756683 waagent[1817]: 2025-01-29T12:01:57.756594Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-76e05e3785] Jan 29 12:01:57.763916 waagent[1817]: 2025-01-29T12:01:57.758152Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 29 12:01:57.763916 waagent[1817]: 2025-01-29T12:01:57.758907Z INFO Daemon Daemon Primary interface is [eth0] Jan 29 12:01:57.788147 systemd-networkd[1579]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 12:01:57.788158 systemd-networkd[1579]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 12:01:57.788219 systemd-networkd[1579]: eth0: DHCP lease lost Jan 29 12:01:57.789571 waagent[1817]: 2025-01-29T12:01:57.789454Z INFO Daemon Daemon Create user account if not exists Jan 29 12:01:57.804920 waagent[1817]: 2025-01-29T12:01:57.790820Z INFO Daemon Daemon User core already exists, skip useradd Jan 29 12:01:57.804920 waagent[1817]: 2025-01-29T12:01:57.791537Z INFO Daemon Daemon Configure sudoer Jan 29 12:01:57.804920 waagent[1817]: 2025-01-29T12:01:57.792265Z INFO Daemon Daemon Configure sshd Jan 29 12:01:57.804920 waagent[1817]: 2025-01-29T12:01:57.793374Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 29 12:01:57.804920 waagent[1817]: 2025-01-29T12:01:57.793994Z INFO Daemon Daemon Deploy ssh public key. Jan 29 12:01:57.806131 systemd-networkd[1579]: eth0: DHCPv6 lease lost Jan 29 12:01:57.834076 systemd-networkd[1579]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 29 12:01:58.882408 waagent[1817]: 2025-01-29T12:01:58.882344Z INFO Daemon Daemon Provisioning complete Jan 29 12:01:58.896446 waagent[1817]: 2025-01-29T12:01:58.896388Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 29 12:01:58.902919 waagent[1817]: 2025-01-29T12:01:58.897596Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 29 12:01:58.902919 waagent[1817]: 2025-01-29T12:01:58.898345Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 29 12:01:59.021731 waagent[1896]: 2025-01-29T12:01:59.021632Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 29 12:01:59.022196 waagent[1896]: 2025-01-29T12:01:59.021798Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 29 12:01:59.022196 waagent[1896]: 2025-01-29T12:01:59.021878Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 29 12:01:59.059522 waagent[1896]: 2025-01-29T12:01:59.059412Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 29 12:01:59.059752 waagent[1896]: 2025-01-29T12:01:59.059698Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:01:59.059838 waagent[1896]: 2025-01-29T12:01:59.059802Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:01:59.068018 waagent[1896]: 2025-01-29T12:01:59.067935Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 29 12:01:59.074086 waagent[1896]: 2025-01-29T12:01:59.074021Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 29 12:01:59.074680 waagent[1896]: 2025-01-29T12:01:59.074609Z INFO ExtHandler Jan 29 12:01:59.074794 waagent[1896]: 2025-01-29T12:01:59.074723Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 5e391626-c741-4919-933e-16c95bdf6d2c eTag: 18163082600334825723 source: Fabric] Jan 29 12:01:59.075207 waagent[1896]: 2025-01-29T12:01:59.075138Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 29 12:01:59.075882 waagent[1896]: 2025-01-29T12:01:59.075814Z INFO ExtHandler Jan 29 12:01:59.075963 waagent[1896]: 2025-01-29T12:01:59.075918Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 29 12:01:59.079813 waagent[1896]: 2025-01-29T12:01:59.079759Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 29 12:01:59.152276 waagent[1896]: 2025-01-29T12:01:59.152132Z INFO ExtHandler Downloaded certificate {'thumbprint': '15E75E62EC7DA85CE577C07E7AF0EA360784AC24', 'hasPrivateKey': False} Jan 29 12:01:59.152686 waagent[1896]: 2025-01-29T12:01:59.152626Z INFO ExtHandler Downloaded certificate {'thumbprint': '0B8D4D430BC53CA4B77D1FFF4491EE8F60732380', 'hasPrivateKey': True} Jan 29 12:01:59.153146 waagent[1896]: 2025-01-29T12:01:59.153097Z INFO ExtHandler Fetch goal state completed Jan 29 12:01:59.169575 waagent[1896]: 2025-01-29T12:01:59.169512Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1896 Jan 29 12:01:59.169726 waagent[1896]: 2025-01-29T12:01:59.169681Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 29 12:01:59.171286 waagent[1896]: 2025-01-29T12:01:59.171229Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 29 12:01:59.171651 waagent[1896]: 2025-01-29T12:01:59.171606Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 29 12:01:59.214230 waagent[1896]: 2025-01-29T12:01:59.214169Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 29 12:01:59.214518 waagent[1896]: 2025-01-29T12:01:59.214458Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 29 12:01:59.221938 waagent[1896]: 2025-01-29T12:01:59.221895Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 29 12:01:59.228811 systemd[1]: Reloading requested from client PID 1911 ('systemctl') (unit waagent.service)... Jan 29 12:01:59.228828 systemd[1]: Reloading... Jan 29 12:01:59.325016 zram_generator::config[1945]: No configuration found. Jan 29 12:01:59.442250 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:59.531840 systemd[1]: Reloading finished in 302 ms. Jan 29 12:01:59.560002 waagent[1896]: 2025-01-29T12:01:59.558047Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 29 12:01:59.567447 systemd[1]: Reloading requested from client PID 2002 ('systemctl') (unit waagent.service)... Jan 29 12:01:59.567464 systemd[1]: Reloading... Jan 29 12:01:59.651605 zram_generator::config[2039]: No configuration found. Jan 29 12:01:59.771045 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:01:59.852494 systemd[1]: Reloading finished in 284 ms. Jan 29 12:01:59.878015 waagent[1896]: 2025-01-29T12:01:59.877204Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 29 12:01:59.878015 waagent[1896]: 2025-01-29T12:01:59.877425Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 29 12:02:00.853812 waagent[1896]: 2025-01-29T12:02:00.853714Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 29 12:02:00.854887 waagent[1896]: 2025-01-29T12:02:00.854806Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 29 12:02:00.857784 waagent[1896]: 2025-01-29T12:02:00.857726Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 29 12:02:00.857931 waagent[1896]: 2025-01-29T12:02:00.857871Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:02:00.858475 waagent[1896]: 2025-01-29T12:02:00.858424Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 29 12:02:00.858597 waagent[1896]: 2025-01-29T12:02:00.858543Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 29 12:02:00.858686 waagent[1896]: 2025-01-29T12:02:00.858639Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:02:00.859409 waagent[1896]: 2025-01-29T12:02:00.859358Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 29 12:02:00.859472 waagent[1896]: 2025-01-29T12:02:00.859435Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 29 12:02:00.859699 waagent[1896]: 2025-01-29T12:02:00.859633Z INFO EnvHandler ExtHandler Configure routes Jan 29 12:02:00.859881 waagent[1896]: 2025-01-29T12:02:00.859838Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
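The EnvHandler and ExtHandler records above show the agent reading the WireServer endpoint 168.63.129.16 from file and configuring routes toward it. As a hypothetical illustration only (no such request appears anywhere in this log), that endpoint could be probed from inside the VM with the documented versions query of the WireServer protocol:

```python
# Hypothetical reachability check for the Azure WireServer endpoint the agent
# logs above (168.63.129.16). Assumes the documented "?comp=versions" query;
# nothing in this log shows the request being made this way.
import urllib.request

WIRESERVER = "168.63.129.16"

def wireserver_versions(timeout: float = 5.0) -> str:
    # The WireServer speaks plain HTTP on port 80 inside the VM.
    req = urllib.request.Request(f"http://{WIRESERVER}/?comp=versions")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    print(wireserver_versions())
```

Note that the firewall rules added later in this log drop new TCP connections to 168.63.129.16 from non-root users, so a probe like this would need to run as root once those rules are in place.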
Jan 29 12:02:00.860317 waagent[1896]: 2025-01-29T12:02:00.860265Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 29 12:02:00.860379 waagent[1896]: 2025-01-29T12:02:00.860341Z INFO EnvHandler ExtHandler Gateway:None Jan 29 12:02:00.860459 waagent[1896]: 2025-01-29T12:02:00.860425Z INFO EnvHandler ExtHandler Routes:None Jan 29 12:02:00.863003 waagent[1896]: 2025-01-29T12:02:00.861195Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 29 12:02:00.863003 waagent[1896]: 2025-01-29T12:02:00.861443Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 29 12:02:00.863003 waagent[1896]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 29 12:02:00.863003 waagent[1896]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 29 12:02:00.863003 waagent[1896]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 29 12:02:00.863003 waagent[1896]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:02:00.863003 waagent[1896]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:02:00.863003 waagent[1896]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 29 12:02:00.863003 waagent[1896]: 2025-01-29T12:02:00.861689Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 29 12:02:00.863003 waagent[1896]: 2025-01-29T12:02:00.862253Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 29 12:02:00.871579 waagent[1896]: 2025-01-29T12:02:00.871520Z INFO ExtHandler ExtHandler Jan 29 12:02:00.871725 waagent[1896]: 2025-01-29T12:02:00.871677Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e76974ac-1b39-4af6-a4b2-c3d685827f87 correlation 9d45a083-77b5-468a-b11b-34b8cd44c238 created: 2025-01-29T12:00:51.236389Z] Jan 29 12:02:00.872340 waagent[1896]: 2025-01-29T12:02:00.872275Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
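The MonitorHandler dump above prints /proc/net/route verbatim; the destination, gateway, and mask columns are little-endian hexadecimal IPv4 values. A short sketch that decodes the entries shown (the constants below are copied from the table above):

```python
# Decode the little-endian hex IPv4 fields used by /proc/net/route, as seen in
# the routing-table dump above (e.g. gateway 0108C80A -> 10.200.8.1).
import socket
import struct

def decode(hex_addr: str) -> str:
    # /proc/net/route stores each address as a little-endian 32-bit integer.
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

for field in ("00000000", "0108C80A", "0008C80A", "10813FA8", "FEA9FEA9", "00FFFFFF"):
    print(field, "->", decode(field))
# 0108C80A -> 10.200.8.1 (the DHCP gateway acquired earlier in this log),
# 0008C80A -> 10.200.8.0, 10813FA8 -> 168.63.129.16 (WireServer),
# FEA9FEA9 -> 169.254.169.254 (instance metadata), 00FFFFFF -> 255.255.255.0.
```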
Jan 29 12:02:00.875301 waagent[1896]: 2025-01-29T12:02:00.875247Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Jan 29 12:02:00.916730 waagent[1896]: 2025-01-29T12:02:00.916651Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2304465C-FE5B-4FD1-B46D-C53F17E1FA77;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 29 12:02:00.926182 waagent[1896]: 2025-01-29T12:02:00.926098Z INFO MonitorHandler ExtHandler Network interfaces: Jan 29 12:02:00.926182 waagent[1896]: Executing ['ip', '-a', '-o', 'link']: Jan 29 12:02:00.926182 waagent[1896]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 29 12:02:00.926182 waagent[1896]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:68:b4:5f brd ff:ff:ff:ff:ff:ff Jan 29 12:02:00.926182 waagent[1896]: 3: enP11536s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:68:b4:5f brd ff:ff:ff:ff:ff:ff\ altname enP11536p0s2 Jan 29 12:02:00.926182 waagent[1896]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 29 12:02:00.926182 waagent[1896]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 29 12:02:00.926182 waagent[1896]: 2: eth0 inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 29 12:02:00.926182 waagent[1896]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 29 12:02:00.926182 waagent[1896]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 29 12:02:00.926182 waagent[1896]: 2: eth0 inet6 fe80::20d:3aff:fe68:b45f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:02:00.926182 waagent[1896]: 3: enP11536s1 inet6 fe80::20d:3aff:fe68:b45f/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 29 12:02:01.039554 waagent[1896]: 2025-01-29T12:02:01.039471Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 29 12:02:01.039554 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.039554 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.039554 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.039554 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.039554 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.039554 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.039554 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:02:01.039554 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:02:01.039554 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 12:02:01.042824 waagent[1896]: 2025-01-29T12:02:01.042756Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 29 12:02:01.042824 waagent[1896]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.042824 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.042824 waagent[1896]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.042824 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.042824 waagent[1896]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 29 12:02:01.042824 waagent[1896]: pkts bytes target prot opt in out source destination Jan 29 12:02:01.042824 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 29 12:02:01.042824 waagent[1896]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 29 12:02:01.042824 waagent[1896]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 29 12:02:01.043305 waagent[1896]: 2025-01-29T12:02:01.043107Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 29 12:02:06.546416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 12:02:06.552221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:06.661295 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:06.665698 (kubelet)[2132]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:07.247327 kubelet[2132]: E0129 12:02:07.247262 2132 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:07.251569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:07.251766 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:17.296490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 12:02:17.302220 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:17.403725 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
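The EnvHandler output above lists the three Azure-fabric rules the agent adds to its OUTPUT chain: accept TCP/53 to 168.63.129.16, accept root-owned (UID 0) traffic to it, and drop other new or invalid connections to that address. A hypothetical reconstruction of those rules is sketched below; the log does not show which iptables table the agent targeted, so none is specified here, and real agent versions may differ:

```python
# Hypothetical reconstruction of the three OUTPUT-chain rules shown above.
# No iptables table is specified because the log does not name one; adjust
# (e.g. add "-t security") to match a real deployment.
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS lookups against the WireServer address (tcp dpt:53).
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # Allow traffic owned by root (UID 0), which is how the agent itself reaches it.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop any other new or invalid connection attempts to the WireServer.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w"] + rule, check=True)
```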
Jan 29 12:02:17.408215 (kubelet)[2148]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:17.894318 chronyd[1696]: Selected source PHC0 Jan 29 12:02:17.998847 kubelet[2148]: E0129 12:02:17.998787 2148 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:18.000386 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:18.000583 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:28.046613 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 12:02:28.052292 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:28.159107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:28.171361 (kubelet)[2163]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:28.745512 kubelet[2163]: E0129 12:02:28.745449 2163 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:28.748328 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:28.748533 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:28.960491 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 12:02:28.965273 systemd[1]: Started sshd@0-10.200.8.19:22-10.200.16.10:52360.service - OpenSSH per-connection server daemon (10.200.16.10:52360). Jan 29 12:02:29.694765 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 52360 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:29.696656 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:29.700973 systemd-logind[1693]: New session 3 of user core. Jan 29 12:02:29.708164 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 12:02:30.270372 systemd[1]: Started sshd@1-10.200.8.19:22-10.200.16.10:52368.service - OpenSSH per-connection server daemon (10.200.16.10:52368). Jan 29 12:02:30.925813 sshd[2177]: Accepted publickey for core from 10.200.16.10 port 52368 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:30.927358 sshd[2177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:30.931318 systemd-logind[1693]: New session 4 of user core. Jan 29 12:02:30.938140 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 12:02:31.392287 sshd[2177]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:31.396811 systemd[1]: sshd@1-10.200.8.19:22-10.200.16.10:52368.service: Deactivated successfully. Jan 29 12:02:31.398873 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 12:02:31.399868 systemd-logind[1693]: Session 4 logged out. Waiting for processes to exit. Jan 29 12:02:31.400831 systemd-logind[1693]: Removed session 4. 
Jan 29 12:02:31.507173 systemd[1]: Started sshd@2-10.200.8.19:22-10.200.16.10:52384.service - OpenSSH per-connection server daemon (10.200.16.10:52384). Jan 29 12:02:32.155919 sshd[2184]: Accepted publickey for core from 10.200.16.10 port 52384 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:32.158545 sshd[2184]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:32.163309 systemd-logind[1693]: New session 5 of user core. Jan 29 12:02:32.181220 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 12:02:32.612204 sshd[2184]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:32.616473 systemd[1]: sshd@2-10.200.8.19:22-10.200.16.10:52384.service: Deactivated successfully. Jan 29 12:02:32.618519 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 12:02:32.619380 systemd-logind[1693]: Session 5 logged out. Waiting for processes to exit. Jan 29 12:02:32.620555 systemd-logind[1693]: Removed session 5. Jan 29 12:02:32.727159 systemd[1]: Started sshd@3-10.200.8.19:22-10.200.16.10:52390.service - OpenSSH per-connection server daemon (10.200.16.10:52390). Jan 29 12:02:33.376646 sshd[2191]: Accepted publickey for core from 10.200.16.10 port 52390 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:33.378206 sshd[2191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:33.383040 systemd-logind[1693]: New session 6 of user core. Jan 29 12:02:33.394164 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 12:02:33.837573 sshd[2191]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:33.842435 systemd[1]: sshd@3-10.200.8.19:22-10.200.16.10:52390.service: Deactivated successfully. Jan 29 12:02:33.844669 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 12:02:33.845570 systemd-logind[1693]: Session 6 logged out. Waiting for processes to exit. Jan 29 12:02:33.846539 systemd-logind[1693]: Removed session 6. Jan 29 12:02:33.952259 systemd[1]: Started sshd@4-10.200.8.19:22-10.200.16.10:52406.service - OpenSSH per-connection server daemon (10.200.16.10:52406). Jan 29 12:02:34.601631 sshd[2198]: Accepted publickey for core from 10.200.16.10 port 52406 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:34.603225 sshd[2198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:34.607399 systemd-logind[1693]: New session 7 of user core. Jan 29 12:02:34.618191 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 12:02:35.099637 sudo[2201]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 12:02:35.100035 sudo[2201]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:35.131031 sudo[2201]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:35.240723 sshd[2198]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:35.244116 systemd[1]: sshd@4-10.200.8.19:22-10.200.16.10:52406.service: Deactivated successfully. Jan 29 12:02:35.246181 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 12:02:35.247567 systemd-logind[1693]: Session 7 logged out. Waiting for processes to exit. Jan 29 12:02:35.248686 systemd-logind[1693]: Removed session 7. Jan 29 12:02:35.355308 systemd[1]: Started sshd@5-10.200.8.19:22-10.200.16.10:52408.service - OpenSSH per-connection server daemon (10.200.16.10:52408). 
Jan 29 12:02:36.002943 sshd[2206]: Accepted publickey for core from 10.200.16.10 port 52408 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:36.004569 sshd[2206]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:36.009399 systemd-logind[1693]: New session 8 of user core. Jan 29 12:02:36.015143 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 12:02:36.363895 sudo[2210]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 12:02:36.364283 sudo[2210]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:36.367699 sudo[2210]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:36.372567 sudo[2209]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 29 12:02:36.372909 sudo[2209]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:36.391311 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:36.392801 auditctl[2213]: No rules Jan 29 12:02:36.393177 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 12:02:36.393372 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:36.395809 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 29 12:02:36.431058 augenrules[2231]: No rules Jan 29 12:02:36.432413 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 29 12:02:36.434070 sudo[2209]: pam_unix(sudo:session): session closed for user root Jan 29 12:02:36.542853 sshd[2206]: pam_unix(sshd:session): session closed for user core Jan 29 12:02:36.546733 systemd[1]: sshd@5-10.200.8.19:22-10.200.16.10:52408.service: Deactivated successfully. Jan 29 12:02:36.548971 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 12:02:36.550678 systemd-logind[1693]: Session 8 logged out. Waiting for processes to exit. Jan 29 12:02:36.551924 systemd-logind[1693]: Removed session 8. Jan 29 12:02:36.661284 systemd[1]: Started sshd@6-10.200.8.19:22-10.200.16.10:35412.service - OpenSSH per-connection server daemon (10.200.16.10:35412). Jan 29 12:02:37.311606 sshd[2239]: Accepted publickey for core from 10.200.16.10 port 35412 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:02:37.313217 sshd[2239]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:02:37.318202 systemd-logind[1693]: New session 9 of user core. Jan 29 12:02:37.324188 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 12:02:37.670954 sudo[2242]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 12:02:37.671413 sudo[2242]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 12:02:38.796424 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 12:02:38.802619 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:38.992915 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:38.998246 (kubelet)[2264]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:39.393003 update_engine[1697]: I20250129 12:02:39.392891 1697 update_attempter.cc:509] Updating boot flags... 
Jan 29 12:02:39.499530 kubelet[2264]: E0129 12:02:39.496090 2264 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:39.501327 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 12:02:39.504861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:39.505513 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:39.516716 (dockerd)[2277]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 12:02:39.542005 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2286) Jan 29 12:02:39.660013 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2285) Jan 29 12:02:39.767493 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2285) Jan 29 12:02:39.910010 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 29 12:02:41.234848 dockerd[2277]: time="2025-01-29T12:02:41.234775784Z" level=info msg="Starting up" Jan 29 12:02:41.753663 dockerd[2277]: time="2025-01-29T12:02:41.753586065Z" level=info msg="Loading containers: start." Jan 29 12:02:41.913014 kernel: Initializing XFRM netlink socket Jan 29 12:02:42.065444 systemd-networkd[1579]: docker0: Link UP Jan 29 12:02:42.098251 dockerd[2277]: time="2025-01-29T12:02:42.098199598Z" level=info msg="Loading containers: done." Jan 29 12:02:42.188282 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1543830392-merged.mount: Deactivated successfully. Jan 29 12:02:42.195054 dockerd[2277]: time="2025-01-29T12:02:42.195005139Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 12:02:42.195176 dockerd[2277]: time="2025-01-29T12:02:42.195147244Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 29 12:02:42.195310 dockerd[2277]: time="2025-01-29T12:02:42.195277448Z" level=info msg="Daemon has completed initialization" Jan 29 12:02:42.254699 dockerd[2277]: time="2025-01-29T12:02:42.254146536Z" level=info msg="API listen on /run/docker.sock" Jan 29 12:02:42.254353 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 12:02:44.458006 containerd[1712]: time="2025-01-29T12:02:44.457853299Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 29 12:02:45.152703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount601318251.mount: Deactivated successfully. 
Jan 29 12:02:47.029460 containerd[1712]: time="2025-01-29T12:02:47.029389638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.032145 containerd[1712]: time="2025-01-29T12:02:47.032073319Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 29 12:02:47.035837 containerd[1712]: time="2025-01-29T12:02:47.035771532Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.040496 containerd[1712]: time="2025-01-29T12:02:47.040423273Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:47.042025 containerd[1712]: time="2025-01-29T12:02:47.041513006Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.583608606s" Jan 29 12:02:47.042025 containerd[1712]: time="2025-01-29T12:02:47.041558708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 29 12:02:47.069366 containerd[1712]: time="2025-01-29T12:02:47.069320951Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 29 12:02:48.924278 containerd[1712]: time="2025-01-29T12:02:48.924215614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.927317 containerd[1712]: time="2025-01-29T12:02:48.927244507Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 29 12:02:48.931813 containerd[1712]: time="2025-01-29T12:02:48.931751043Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.940083 containerd[1712]: time="2025-01-29T12:02:48.940028895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:48.942054 containerd[1712]: time="2025-01-29T12:02:48.941056026Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.871687874s" Jan 29 12:02:48.942054 containerd[1712]: time="2025-01-29T12:02:48.941106228Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 29 12:02:48.967082 
containerd[1712]: time="2025-01-29T12:02:48.967029215Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 29 12:02:49.546435 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 29 12:02:49.557094 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:02:49.690274 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:02:49.700787 (kubelet)[2580]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:02:49.742408 kubelet[2580]: E0129 12:02:49.742351 2580 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:02:49.745023 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:02:49.745225 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:02:50.860067 containerd[1712]: time="2025-01-29T12:02:50.860010225Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:50.865495 containerd[1712]: time="2025-01-29T12:02:50.865433259Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 29 12:02:50.869690 containerd[1712]: time="2025-01-29T12:02:50.869616062Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:50.877477 containerd[1712]: time="2025-01-29T12:02:50.877417254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:50.878570 containerd[1712]: time="2025-01-29T12:02:50.878420579Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.911351163s" Jan 29 12:02:50.878570 containerd[1712]: time="2025-01-29T12:02:50.878461080Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 29 12:02:50.899534 containerd[1712]: time="2025-01-29T12:02:50.899486399Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 29 12:02:52.254219 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2880341970.mount: Deactivated successfully. 
Jan 29 12:02:52.739224 containerd[1712]: time="2025-01-29T12:02:52.739061869Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:52.741329 containerd[1712]: time="2025-01-29T12:02:52.741259223Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 29 12:02:52.743966 containerd[1712]: time="2025-01-29T12:02:52.743895588Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:52.748813 containerd[1712]: time="2025-01-29T12:02:52.748750608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:52.749564 containerd[1712]: time="2025-01-29T12:02:52.749377823Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.849849624s" Jan 29 12:02:52.749564 containerd[1712]: time="2025-01-29T12:02:52.749420124Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 29 12:02:52.772387 containerd[1712]: time="2025-01-29T12:02:52.772337190Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 29 12:02:53.436224 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901033900.mount: Deactivated successfully. 
Jan 29 12:02:54.721208 containerd[1712]: time="2025-01-29T12:02:54.721151154Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:54.725212 containerd[1712]: time="2025-01-29T12:02:54.725155553Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 29 12:02:54.729101 containerd[1712]: time="2025-01-29T12:02:54.729048649Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:54.733860 containerd[1712]: time="2025-01-29T12:02:54.733806566Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:54.735296 containerd[1712]: time="2025-01-29T12:02:54.734821291Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.962443501s" Jan 29 12:02:54.735296 containerd[1712]: time="2025-01-29T12:02:54.734862292Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 29 12:02:54.757417 containerd[1712]: time="2025-01-29T12:02:54.757379548Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 29 12:02:55.365037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4281709.mount: Deactivated successfully. 
Jan 29 12:02:55.393520 containerd[1712]: time="2025-01-29T12:02:55.393456435Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:55.397888 containerd[1712]: time="2025-01-29T12:02:55.397821743Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 29 12:02:55.403507 containerd[1712]: time="2025-01-29T12:02:55.403444982Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:55.407946 containerd[1712]: time="2025-01-29T12:02:55.407909692Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:55.408771 containerd[1712]: time="2025-01-29T12:02:55.408613509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 651.135559ms" Jan 29 12:02:55.408771 containerd[1712]: time="2025-01-29T12:02:55.408656310Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 29 12:02:55.430937 containerd[1712]: time="2025-01-29T12:02:55.430895559Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 29 12:02:56.155667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2990106961.mount: Deactivated successfully. Jan 29 12:02:58.616342 containerd[1712]: time="2025-01-29T12:02:58.616278415Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:58.618257 containerd[1712]: time="2025-01-29T12:02:58.618178765Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 29 12:02:58.621938 containerd[1712]: time="2025-01-29T12:02:58.621870663Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:58.626853 containerd[1712]: time="2025-01-29T12:02:58.626795992Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:02:58.628330 containerd[1712]: time="2025-01-29T12:02:58.627938622Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.196980062s" Jan 29 12:02:58.628330 containerd[1712]: time="2025-01-29T12:02:58.628001424Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 29 12:02:59.796460 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
Jan 29 12:02:59.804729 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:00.397251 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:00.404241 (kubelet)[2778]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 12:03:00.466753 kubelet[2778]: E0129 12:03:00.466689 2778 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 12:03:00.468824 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 12:03:00.469020 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 12:03:02.268278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:02.274259 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:02.300190 systemd[1]: Reloading requested from client PID 2793 ('systemctl') (unit session-9.scope)... Jan 29 12:03:02.300387 systemd[1]: Reloading... Jan 29 12:03:02.430012 zram_generator::config[2833]: No configuration found. Jan 29 12:03:02.543226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:02.622600 systemd[1]: Reloading finished in 321 ms. Jan 29 12:03:02.678268 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 12:03:02.678339 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 12:03:02.678775 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:02.687264 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:02.942379 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:02.948766 (kubelet)[2903]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:03:02.989228 kubelet[2903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:02.989228 kubelet[2903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:03:02.989228 kubelet[2903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 12:03:02.989707 kubelet[2903]: I0129 12:03:02.989276 2903 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:03:03.369028 kubelet[2903]: I0129 12:03:03.368970 2903 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:03:03.371004 kubelet[2903]: I0129 12:03:03.369314 2903 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:03:03.371004 kubelet[2903]: I0129 12:03:03.370023 2903 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:03:03.596226 kubelet[2903]: I0129 12:03:03.596188 2903 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:03:03.597689 kubelet[2903]: E0129 12:03:03.597588 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.606538 kubelet[2903]: I0129 12:03:03.606509 2903 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 12:03:03.606814 kubelet[2903]: I0129 12:03:03.606772 2903 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:03:03.607101 kubelet[2903]: I0129 12:03:03.606811 2903 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-76e05e3785","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:03:03.613425 kubelet[2903]: I0129 12:03:03.607742 2903 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:03:03.613425 kubelet[2903]: I0129 12:03:03.607768 2903 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:03:03.613702 kubelet[2903]: I0129 12:03:03.613675 2903 state_mem.go:36] "Initialized new in-memory 
state store" Jan 29 12:03:03.614528 kubelet[2903]: I0129 12:03:03.614491 2903 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:03:03.614857 kubelet[2903]: I0129 12:03:03.614623 2903 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:03:03.614857 kubelet[2903]: I0129 12:03:03.614664 2903 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:03:03.614857 kubelet[2903]: I0129 12:03:03.614685 2903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:03:03.616372 kubelet[2903]: W0129 12:03:03.615158 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-76e05e3785&limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.616372 kubelet[2903]: E0129 12:03:03.615226 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-76e05e3785&limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.620856 kubelet[2903]: W0129 12:03:03.620376 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.620856 kubelet[2903]: E0129 12:03:03.620425 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.621780 kubelet[2903]: I0129 12:03:03.621608 2903 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:03:03.624004 kubelet[2903]: I0129 12:03:03.623467 2903 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:03:03.624004 kubelet[2903]: W0129 12:03:03.623533 2903 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 29 12:03:03.624354 kubelet[2903]: I0129 12:03:03.624327 2903 server.go:1264] "Started kubelet" Jan 29 12:03:03.625722 kubelet[2903]: I0129 12:03:03.625439 2903 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:03:03.627019 kubelet[2903]: I0129 12:03:03.626439 2903 server.go:455] "Adding debug handlers to kubelet server" Jan 29 12:03:03.627430 kubelet[2903]: I0129 12:03:03.627374 2903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:03:03.627791 kubelet[2903]: I0129 12:03:03.627773 2903 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:03:03.629275 kubelet[2903]: I0129 12:03:03.628720 2903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:03:03.636122 kubelet[2903]: I0129 12:03:03.636106 2903 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:03:03.636492 kubelet[2903]: E0129 12:03:03.636377 2903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-76e05e3785.181f2833ae7ef929 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-76e05e3785,UID:ci-4081.3.0-a-76e05e3785,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-76e05e3785,},FirstTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,LastTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-76e05e3785,}" Jan 29 12:03:03.637882 kubelet[2903]: E0129 12:03:03.637843 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-76e05e3785?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="200ms" Jan 29 12:03:03.638563 kubelet[2903]: I0129 12:03:03.638538 2903 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:03:03.638939 kubelet[2903]: W0129 12:03:03.638891 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.639046 kubelet[2903]: E0129 12:03:03.638952 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.640089 kubelet[2903]: I0129 12:03:03.640072 2903 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:03:03.640621 kubelet[2903]: I0129 12:03:03.640585 2903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:03:03.642465 kubelet[2903]: E0129 12:03:03.642371 2903 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:03:03.643592 kubelet[2903]: I0129 12:03:03.643555 2903 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:03:03.643592 kubelet[2903]: I0129 12:03:03.643577 2903 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:03:03.657228 kubelet[2903]: I0129 12:03:03.657185 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:03:03.658223 kubelet[2903]: I0129 12:03:03.658195 2903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:03:03.658223 kubelet[2903]: I0129 12:03:03.658222 2903 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:03:03.658344 kubelet[2903]: I0129 12:03:03.658244 2903 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:03:03.658344 kubelet[2903]: E0129 12:03:03.658286 2903 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:03:03.665654 kubelet[2903]: W0129 12:03:03.665614 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.665751 kubelet[2903]: E0129 12:03:03.665661 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:03.702755 kubelet[2903]: I0129 12:03:03.702724 2903 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:03:03.702755 kubelet[2903]: I0129 12:03:03.702746 2903 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:03:03.703013 kubelet[2903]: I0129 12:03:03.702770 2903 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:03.715560 kubelet[2903]: I0129 12:03:03.715516 2903 policy_none.go:49] "None policy: Start" Jan 29 12:03:03.716358 kubelet[2903]: I0129 12:03:03.716336 2903 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:03:03.716454 kubelet[2903]: I0129 12:03:03.716367 2903 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:03:03.727882 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 12:03:03.738575 kubelet[2903]: I0129 12:03:03.738133 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.738702 kubelet[2903]: E0129 12:03:03.738616 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.741724 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 12:03:03.745537 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 29 12:03:03.752746 kubelet[2903]: I0129 12:03:03.752711 2903 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:03:03.753025 kubelet[2903]: I0129 12:03:03.752965 2903 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:03:03.753208 kubelet[2903]: I0129 12:03:03.753122 2903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:03:03.755121 kubelet[2903]: E0129 12:03:03.755074 2903 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-76e05e3785\" not found" Jan 29 12:03:03.758911 kubelet[2903]: I0129 12:03:03.758873 2903 topology_manager.go:215] "Topology Admit Handler" podUID="b5f1fefd166af65a6d7f31081c4ac472" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.760665 kubelet[2903]: I0129 12:03:03.760616 2903 topology_manager.go:215] "Topology Admit Handler" podUID="fe542c87193858c23c338262077f781e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.762328 kubelet[2903]: I0129 12:03:03.762301 2903 topology_manager.go:215] "Topology Admit Handler" podUID="1b96274d6406e823d1ab2357603832ca" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.769416 systemd[1]: Created slice kubepods-burstable-podb5f1fefd166af65a6d7f31081c4ac472.slice - libcontainer container kubepods-burstable-podb5f1fefd166af65a6d7f31081c4ac472.slice. Jan 29 12:03:03.782035 systemd[1]: Created slice kubepods-burstable-podfe542c87193858c23c338262077f781e.slice - libcontainer container kubepods-burstable-podfe542c87193858c23c338262077f781e.slice. Jan 29 12:03:03.786698 systemd[1]: Created slice kubepods-burstable-pod1b96274d6406e823d1ab2357603832ca.slice - libcontainer container kubepods-burstable-pod1b96274d6406e823d1ab2357603832ca.slice. 
Jan 29 12:03:03.839117 kubelet[2903]: E0129 12:03:03.839056 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-76e05e3785?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="400ms" Jan 29 12:03:03.841344 kubelet[2903]: I0129 12:03:03.841277 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841344 kubelet[2903]: I0129 12:03:03.841327 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841609 kubelet[2903]: I0129 12:03:03.841357 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841609 kubelet[2903]: I0129 12:03:03.841383 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b96274d6406e823d1ab2357603832ca-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-76e05e3785\" (UID: \"1b96274d6406e823d1ab2357603832ca\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841609 kubelet[2903]: I0129 12:03:03.841405 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841609 kubelet[2903]: I0129 12:03:03.841430 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841609 kubelet[2903]: I0129 12:03:03.841454 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841758 kubelet[2903]: I0129 12:03:03.841490 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.841758 kubelet[2903]: I0129 12:03:03.841534 2903 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.941834 kubelet[2903]: I0129 12:03:03.941682 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:03.942820 kubelet[2903]: E0129 12:03:03.942781 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:04.079823 containerd[1712]: time="2025-01-29T12:03:04.079775040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-76e05e3785,Uid:b5f1fefd166af65a6d7f31081c4ac472,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:04.087087 containerd[1712]: time="2025-01-29T12:03:04.087049331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-76e05e3785,Uid:fe542c87193858c23c338262077f781e,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:04.089844 containerd[1712]: time="2025-01-29T12:03:04.089596998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-76e05e3785,Uid:1b96274d6406e823d1ab2357603832ca,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:04.240298 kubelet[2903]: E0129 12:03:04.240102 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-76e05e3785?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="800ms" Jan 29 12:03:04.345293 kubelet[2903]: I0129 12:03:04.345233 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:04.345653 kubelet[2903]: E0129 12:03:04.345623 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:04.677278 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2473746251.mount: Deactivated successfully. 
Jan 29 12:03:04.724435 kubelet[2903]: W0129 12:03:04.724350 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.724435 kubelet[2903]: E0129 12:03:04.724406 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.19:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.744858 containerd[1712]: time="2025-01-29T12:03:04.744800558Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:04.746896 containerd[1712]: time="2025-01-29T12:03:04.746842712Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 29 12:03:04.752040 containerd[1712]: time="2025-01-29T12:03:04.752005948Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:04.756597 containerd[1712]: time="2025-01-29T12:03:04.756563268Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:04.758573 containerd[1712]: time="2025-01-29T12:03:04.758522720Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:03:04.763749 containerd[1712]: time="2025-01-29T12:03:04.763711657Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:04.765605 containerd[1712]: time="2025-01-29T12:03:04.765330199Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 12:03:04.769869 containerd[1712]: time="2025-01-29T12:03:04.769822318Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 12:03:04.770897 containerd[1712]: time="2025-01-29T12:03:04.770648339Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 680.989739ms" Jan 29 12:03:04.771726 containerd[1712]: time="2025-01-29T12:03:04.771634465Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 691.766123ms" Jan 29 12:03:04.775511 kubelet[2903]: W0129 12:03:04.775449 2903 reflector.go:547] 
k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.775600 kubelet[2903]: E0129 12:03:04.775523 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.19:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.785623 containerd[1712]: time="2025-01-29T12:03:04.785584733Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.4634ms" Jan 29 12:03:04.871569 kubelet[2903]: W0129 12:03:04.871505 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-76e05e3785&limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.871569 kubelet[2903]: E0129 12:03:04.871573 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.19:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-76e05e3785&limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.899172 kubelet[2903]: W0129 12:03:04.899112 2903 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:04.899172 kubelet[2903]: E0129 12:03:04.899177 2903 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.19:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:05.041351 kubelet[2903]: E0129 12:03:05.041211 2903 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.19:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-76e05e3785?timeout=10s\": dial tcp 10.200.8.19:6443: connect: connection refused" interval="1.6s" Jan 29 12:03:05.148371 kubelet[2903]: I0129 12:03:05.148325 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:05.148826 kubelet[2903]: E0129 12:03:05.148786 2903 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.19:6443/api/v1/nodes\": dial tcp 10.200.8.19:6443: connect: connection refused" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:05.418660 kubelet[2903]: E0129 12:03:05.418534 2903 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.19:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.19:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-76e05e3785.181f2833ae7ef929 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-76e05e3785,UID:ci-4081.3.0-a-76e05e3785,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-76e05e3785,},FirstTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,LastTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-76e05e3785,}" Jan 29 12:03:05.608277 kubelet[2903]: E0129 12:03:05.608230 2903 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.19:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.19:6443: connect: connection refused Jan 29 12:03:05.722557 containerd[1712]: time="2025-01-29T12:03:05.722074803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:05.722557 containerd[1712]: time="2025-01-29T12:03:05.722154105Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:05.722557 containerd[1712]: time="2025-01-29T12:03:05.722219206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.726854 containerd[1712]: time="2025-01-29T12:03:05.726268813Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:05.726854 containerd[1712]: time="2025-01-29T12:03:05.726335415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:05.726854 containerd[1712]: time="2025-01-29T12:03:05.726371916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.726854 containerd[1712]: time="2025-01-29T12:03:05.726496919Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.727171 containerd[1712]: time="2025-01-29T12:03:05.724797274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.732014 containerd[1712]: time="2025-01-29T12:03:05.728353768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:05.732014 containerd[1712]: time="2025-01-29T12:03:05.728483871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:05.732014 containerd[1712]: time="2025-01-29T12:03:05.728521972Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.732014 containerd[1712]: time="2025-01-29T12:03:05.728678177Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:05.788161 systemd[1]: Started cri-containerd-410b82b2cb33c2068603f667598df1b8c9cd9b3ded9b983616e9c7590a0f5f18.scope - libcontainer container 410b82b2cb33c2068603f667598df1b8c9cd9b3ded9b983616e9c7590a0f5f18. Jan 29 12:03:05.790376 systemd[1]: Started cri-containerd-4c39b33a352a7a0928e2d5ecd99f65a33f76702af9f8523d53e5be9d12f9406c.scope - libcontainer container 4c39b33a352a7a0928e2d5ecd99f65a33f76702af9f8523d53e5be9d12f9406c. Jan 29 12:03:05.792406 systemd[1]: Started cri-containerd-d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206.scope - libcontainer container d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206. Jan 29 12:03:05.880781 containerd[1712]: time="2025-01-29T12:03:05.878440422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-76e05e3785,Uid:fe542c87193858c23c338262077f781e,Namespace:kube-system,Attempt:0,} returns sandbox id \"410b82b2cb33c2068603f667598df1b8c9cd9b3ded9b983616e9c7590a0f5f18\"" Jan 29 12:03:05.884706 containerd[1712]: time="2025-01-29T12:03:05.884661586Z" level=info msg="CreateContainer within sandbox \"410b82b2cb33c2068603f667598df1b8c9cd9b3ded9b983616e9c7590a0f5f18\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 12:03:05.885726 containerd[1712]: time="2025-01-29T12:03:05.885690113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-76e05e3785,Uid:1b96274d6406e823d1ab2357603832ca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c39b33a352a7a0928e2d5ecd99f65a33f76702af9f8523d53e5be9d12f9406c\"" Jan 29 12:03:05.887119 containerd[1712]: time="2025-01-29T12:03:05.887087350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-76e05e3785,Uid:b5f1fefd166af65a6d7f31081c4ac472,Namespace:kube-system,Attempt:0,} returns sandbox id \"d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206\"" Jan 29 12:03:05.890154 containerd[1712]: time="2025-01-29T12:03:05.890126530Z" level=info msg="CreateContainer within sandbox \"4c39b33a352a7a0928e2d5ecd99f65a33f76702af9f8523d53e5be9d12f9406c\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 12:03:05.891277 containerd[1712]: time="2025-01-29T12:03:05.891247159Z" level=info msg="CreateContainer within sandbox \"d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 12:03:05.973858 containerd[1712]: time="2025-01-29T12:03:05.973726024Z" level=info msg="CreateContainer within sandbox \"4c39b33a352a7a0928e2d5ecd99f65a33f76702af9f8523d53e5be9d12f9406c\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a4dca3bd0397fd090b6e0012f5d2a5f7226131a786915260fd67cde98920cc6f\"" Jan 29 12:03:05.975092 containerd[1712]: time="2025-01-29T12:03:05.975052657Z" level=info msg="StartContainer for \"a4dca3bd0397fd090b6e0012f5d2a5f7226131a786915260fd67cde98920cc6f\"" Jan 29 12:03:05.987428 containerd[1712]: time="2025-01-29T12:03:05.987385172Z" level=info msg="CreateContainer within sandbox \"410b82b2cb33c2068603f667598df1b8c9cd9b3ded9b983616e9c7590a0f5f18\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d58d0890bf2830464f4f047a9d980f159e6787fd49dac4942c5e29d6e097620\"" Jan 29 12:03:05.988575 containerd[1712]: time="2025-01-29T12:03:05.988379497Z" level=info msg="StartContainer for 
\"4d58d0890bf2830464f4f047a9d980f159e6787fd49dac4942c5e29d6e097620\"" Jan 29 12:03:05.990622 containerd[1712]: time="2025-01-29T12:03:05.990591653Z" level=info msg="CreateContainer within sandbox \"d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54151652dab70a2edd9afe7f5089ed2e2660a27036a47c87d4ea5d1b1662afc0\"" Jan 29 12:03:05.991118 containerd[1712]: time="2025-01-29T12:03:05.991089466Z" level=info msg="StartContainer for \"54151652dab70a2edd9afe7f5089ed2e2660a27036a47c87d4ea5d1b1662afc0\"" Jan 29 12:03:06.014194 systemd[1]: Started cri-containerd-a4dca3bd0397fd090b6e0012f5d2a5f7226131a786915260fd67cde98920cc6f.scope - libcontainer container a4dca3bd0397fd090b6e0012f5d2a5f7226131a786915260fd67cde98920cc6f. Jan 29 12:03:06.033975 systemd[1]: Started cri-containerd-4d58d0890bf2830464f4f047a9d980f159e6787fd49dac4942c5e29d6e097620.scope - libcontainer container 4d58d0890bf2830464f4f047a9d980f159e6787fd49dac4942c5e29d6e097620. Jan 29 12:03:06.048207 systemd[1]: Started cri-containerd-54151652dab70a2edd9afe7f5089ed2e2660a27036a47c87d4ea5d1b1662afc0.scope - libcontainer container 54151652dab70a2edd9afe7f5089ed2e2660a27036a47c87d4ea5d1b1662afc0. Jan 29 12:03:06.124291 containerd[1712]: time="2025-01-29T12:03:06.124237857Z" level=info msg="StartContainer for \"54151652dab70a2edd9afe7f5089ed2e2660a27036a47c87d4ea5d1b1662afc0\" returns successfully" Jan 29 12:03:06.140949 containerd[1712]: time="2025-01-29T12:03:06.140857080Z" level=info msg="StartContainer for \"4d58d0890bf2830464f4f047a9d980f159e6787fd49dac4942c5e29d6e097620\" returns successfully" Jan 29 12:03:06.148150 containerd[1712]: time="2025-01-29T12:03:06.148090464Z" level=info msg="StartContainer for \"a4dca3bd0397fd090b6e0012f5d2a5f7226131a786915260fd67cde98920cc6f\" returns successfully" Jan 29 12:03:06.738747 systemd[1]: run-containerd-runc-k8s.io-d268931a1ccb37dc26042d9a0e2b33ba479956c62f93ae8debe76097e5e1a206-runc.nRMr8M.mount: Deactivated successfully. Jan 29 12:03:06.752265 kubelet[2903]: I0129 12:03:06.752229 2903 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:08.934024 kubelet[2903]: I0129 12:03:08.933302 2903 apiserver.go:52] "Watching apiserver" Jan 29 12:03:08.940676 kubelet[2903]: I0129 12:03:08.938926 2903 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:08.940676 kubelet[2903]: I0129 12:03:08.939188 2903 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:03:09.645545 kubelet[2903]: W0129 12:03:09.644966 2903 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:10.520904 systemd[1]: Reloading requested from client PID 3175 ('systemctl') (unit session-9.scope)... Jan 29 12:03:10.520919 systemd[1]: Reloading... Jan 29 12:03:10.607097 zram_generator::config[3211]: No configuration found. Jan 29 12:03:10.745231 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 12:03:10.838703 systemd[1]: Reloading finished in 317 ms. 
Jan 29 12:03:10.881682 kubelet[2903]: E0129 12:03:10.881488 2903 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-a-76e05e3785.181f2833ae7ef929 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-76e05e3785,UID:ci-4081.3.0-a-76e05e3785,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-76e05e3785,},FirstTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,LastTimestamp:2025-01-29 12:03:03.624300841 +0000 UTC m=+0.671688795,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-76e05e3785,}" Jan 29 12:03:10.882002 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:10.895762 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 12:03:10.896003 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:10.902371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 12:03:11.005638 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 12:03:11.018632 (kubelet)[3282]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 12:03:11.065247 kubelet[3282]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:11.065247 kubelet[3282]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 12:03:11.065247 kubelet[3282]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 12:03:11.065897 kubelet[3282]: I0129 12:03:11.065317 3282 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 12:03:11.071379 kubelet[3282]: I0129 12:03:11.071331 3282 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 29 12:03:11.072017 kubelet[3282]: I0129 12:03:11.071542 3282 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 12:03:11.072518 kubelet[3282]: I0129 12:03:11.072493 3282 server.go:927] "Client rotation is on, will bootstrap in background" Jan 29 12:03:11.076702 kubelet[3282]: I0129 12:03:11.076676 3282 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 12:03:11.078352 kubelet[3282]: I0129 12:03:11.078324 3282 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 12:03:11.084972 kubelet[3282]: I0129 12:03:11.084951 3282 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 12:03:11.085346 kubelet[3282]: I0129 12:03:11.085316 3282 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 12:03:11.085580 kubelet[3282]: I0129 12:03:11.085411 3282 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-76e05e3785","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 29 12:03:11.085736 kubelet[3282]: I0129 12:03:11.085630 3282 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 12:03:11.085736 kubelet[3282]: I0129 12:03:11.085648 3282 container_manager_linux.go:301] "Creating device plugin manager" Jan 29 12:03:11.085736 kubelet[3282]: I0129 12:03:11.085704 3282 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:11.085876 kubelet[3282]: I0129 12:03:11.085809 3282 kubelet.go:400] "Attempting to sync node with API server" Jan 29 12:03:11.085876 kubelet[3282]: I0129 12:03:11.085824 3282 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 12:03:11.085876 kubelet[3282]: I0129 12:03:11.085857 3282 kubelet.go:312] "Adding apiserver pod source" Jan 29 12:03:11.086002 kubelet[3282]: I0129 12:03:11.085890 3282 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 12:03:11.090108 kubelet[3282]: I0129 12:03:11.089556 3282 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 29 12:03:11.090108 kubelet[3282]: I0129 12:03:11.089745 3282 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 12:03:11.091000 kubelet[3282]: I0129 12:03:11.090524 3282 server.go:1264] "Started kubelet" Jan 29 12:03:11.096590 kubelet[3282]: I0129 12:03:11.096574 3282 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 12:03:11.105765 kubelet[3282]: I0129 12:03:11.105736 3282 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 12:03:11.106970 kubelet[3282]: I0129 12:03:11.106952 3282 server.go:455] "Adding 
debug handlers to kubelet server" Jan 29 12:03:11.108155 kubelet[3282]: I0129 12:03:11.108106 3282 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 12:03:11.108436 kubelet[3282]: I0129 12:03:11.108421 3282 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 12:03:11.110409 kubelet[3282]: I0129 12:03:11.110392 3282 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 29 12:03:11.112696 kubelet[3282]: I0129 12:03:11.112366 3282 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 12:03:11.112696 kubelet[3282]: I0129 12:03:11.112535 3282 reconciler.go:26] "Reconciler: start to sync state" Jan 29 12:03:11.115350 kubelet[3282]: I0129 12:03:11.115325 3282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 12:03:11.125848 kubelet[3282]: I0129 12:03:11.124767 3282 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 12:03:11.126396 kubelet[3282]: I0129 12:03:11.126365 3282 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 12:03:11.126484 kubelet[3282]: I0129 12:03:11.126403 3282 kubelet.go:2337] "Starting kubelet main sync loop" Jan 29 12:03:11.126484 kubelet[3282]: E0129 12:03:11.126457 3282 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 12:03:11.127772 kubelet[3282]: I0129 12:03:11.127424 3282 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 12:03:11.132498 kubelet[3282]: E0129 12:03:11.132479 3282 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 12:03:11.133802 kubelet[3282]: I0129 12:03:11.133777 3282 factory.go:221] Registration of the containerd container factory successfully Jan 29 12:03:11.133802 kubelet[3282]: I0129 12:03:11.133799 3282 factory.go:221] Registration of the systemd container factory successfully Jan 29 12:03:11.168529 kubelet[3282]: I0129 12:03:11.168499 3282 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 12:03:11.168529 kubelet[3282]: I0129 12:03:11.168516 3282 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 12:03:11.168529 kubelet[3282]: I0129 12:03:11.168539 3282 state_mem.go:36] "Initialized new in-memory state store" Jan 29 12:03:11.169023 kubelet[3282]: I0129 12:03:11.168962 3282 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 12:03:11.169023 kubelet[3282]: I0129 12:03:11.168994 3282 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 12:03:11.169023 kubelet[3282]: I0129 12:03:11.169021 3282 policy_none.go:49] "None policy: Start" Jan 29 12:03:11.169656 kubelet[3282]: I0129 12:03:11.169641 3282 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 12:03:11.169731 kubelet[3282]: I0129 12:03:11.169696 3282 state_mem.go:35] "Initializing new in-memory state store" Jan 29 12:03:11.214156 kubelet[3282]: I0129 12:03:11.213972 3282 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.224709 kubelet[3282]: I0129 12:03:11.224597 3282 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.226664 kubelet[3282]: E0129 12:03:11.226637 3282 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:03:11.535067 kubelet[3282]: E0129 12:03:11.427099 3282 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 29 12:03:11.536476 kubelet[3282]: I0129 12:03:11.536228 3282 state_mem.go:75] "Updated machine memory state" Jan 29 12:03:11.536898 kubelet[3282]: I0129 12:03:11.536739 3282 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.550845 kubelet[3282]: I0129 12:03:11.550342 3282 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 12:03:11.553522 kubelet[3282]: I0129 12:03:11.551629 3282 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 12:03:11.553522 kubelet[3282]: I0129 12:03:11.551752 3282 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 12:03:11.828152 kubelet[3282]: I0129 12:03:11.827767 3282 topology_manager.go:215] "Topology Admit Handler" podUID="b5f1fefd166af65a6d7f31081c4ac472" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.828152 kubelet[3282]: I0129 12:03:11.827893 3282 topology_manager.go:215] "Topology Admit Handler" podUID="fe542c87193858c23c338262077f781e" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.828152 kubelet[3282]: I0129 12:03:11.828020 3282 topology_manager.go:215] "Topology Admit Handler" podUID="1b96274d6406e823d1ab2357603832ca" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.849688 kubelet[3282]: W0129 12:03:11.849245 3282 warnings.go:70] 
metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:11.849688 kubelet[3282]: W0129 12:03:11.849314 3282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:11.849688 kubelet[3282]: W0129 12:03:11.849587 3282 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 29 12:03:11.849688 kubelet[3282]: E0129 12:03:11.849636 3282 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-4081.3.0-a-76e05e3785\" already exists" pod="kube-system/kube-scheduler-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916169 kubelet[3282]: I0129 12:03:11.915690 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916169 kubelet[3282]: I0129 12:03:11.915739 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916169 kubelet[3282]: I0129 12:03:11.915772 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916169 kubelet[3282]: I0129 12:03:11.915797 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916169 kubelet[3282]: I0129 12:03:11.915832 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1b96274d6406e823d1ab2357603832ca-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-76e05e3785\" (UID: \"1b96274d6406e823d1ab2357603832ca\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916505 kubelet[3282]: I0129 12:03:11.915854 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916505 kubelet[3282]: I0129 12:03:11.915878 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b5f1fefd166af65a6d7f31081c4ac472-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-76e05e3785\" (UID: \"b5f1fefd166af65a6d7f31081c4ac472\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916505 kubelet[3282]: I0129 12:03:11.916037 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:11.916505 kubelet[3282]: I0129 12:03:11.916101 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/fe542c87193858c23c338262077f781e-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-76e05e3785\" (UID: \"fe542c87193858c23c338262077f781e\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" Jan 29 12:03:12.088301 kubelet[3282]: I0129 12:03:12.088174 3282 apiserver.go:52] "Watching apiserver" Jan 29 12:03:12.113511 kubelet[3282]: I0129 12:03:12.113458 3282 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 12:03:12.185415 kubelet[3282]: I0129 12:03:12.185352 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-76e05e3785" podStartSLOduration=3.185332018 podStartE2EDuration="3.185332018s" podCreationTimestamp="2025-01-29 12:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:12.185172914 +0000 UTC m=+1.162092296" watchObservedRunningTime="2025-01-29 12:03:12.185332018 +0000 UTC m=+1.162251300" Jan 29 12:03:12.185624 kubelet[3282]: I0129 12:03:12.185474 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-76e05e3785" podStartSLOduration=1.185465022 podStartE2EDuration="1.185465022s" podCreationTimestamp="2025-01-29 12:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:12.17557217 +0000 UTC m=+1.152491452" watchObservedRunningTime="2025-01-29 12:03:12.185465022 +0000 UTC m=+1.162384304" Jan 29 12:03:12.194074 kubelet[3282]: I0129 12:03:12.193909 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-76e05e3785" podStartSLOduration=1.193890336 podStartE2EDuration="1.193890336s" podCreationTimestamp="2025-01-29 12:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:12.193714732 +0000 UTC m=+1.170634014" watchObservedRunningTime="2025-01-29 12:03:12.193890336 +0000 UTC m=+1.170809618" Jan 29 12:03:16.973464 sudo[2242]: pam_unix(sudo:session): session closed for user root Jan 29 12:03:17.079953 sshd[2239]: pam_unix(sshd:session): session closed for user core Jan 29 12:03:17.084936 systemd-logind[1693]: Session 9 logged out. Waiting for processes to exit. Jan 29 12:03:17.085334 systemd[1]: sshd@6-10.200.8.19:22-10.200.16.10:35412.service: Deactivated successfully. 
Jan 29 12:03:17.088032 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 12:03:17.088370 systemd[1]: session-9.scope: Consumed 5.318s CPU time, 194.3M memory peak, 0B memory swap peak. Jan 29 12:03:17.090158 systemd-logind[1693]: Removed session 9. Jan 29 12:03:24.924024 kubelet[3282]: I0129 12:03:24.923897 3282 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 12:03:24.924663 containerd[1712]: time="2025-01-29T12:03:24.924537475Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 12:03:24.925174 kubelet[3282]: I0129 12:03:24.924792 3282 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 12:03:25.565021 kubelet[3282]: I0129 12:03:25.562327 3282 topology_manager.go:215] "Topology Admit Handler" podUID="a7437b7b-66fb-498a-8456-de7760db52d4" podNamespace="kube-system" podName="kube-proxy-nbr9q" Jan 29 12:03:25.576280 systemd[1]: Created slice kubepods-besteffort-poda7437b7b_66fb_498a_8456_de7760db52d4.slice - libcontainer container kubepods-besteffort-poda7437b7b_66fb_498a_8456_de7760db52d4.slice. Jan 29 12:03:25.705939 kubelet[3282]: I0129 12:03:25.705866 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtswm\" (UniqueName: \"kubernetes.io/projected/a7437b7b-66fb-498a-8456-de7760db52d4-kube-api-access-vtswm\") pod \"kube-proxy-nbr9q\" (UID: \"a7437b7b-66fb-498a-8456-de7760db52d4\") " pod="kube-system/kube-proxy-nbr9q" Jan 29 12:03:25.706283 kubelet[3282]: I0129 12:03:25.705938 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a7437b7b-66fb-498a-8456-de7760db52d4-kube-proxy\") pod \"kube-proxy-nbr9q\" (UID: \"a7437b7b-66fb-498a-8456-de7760db52d4\") " pod="kube-system/kube-proxy-nbr9q" Jan 29 12:03:25.706283 kubelet[3282]: I0129 12:03:25.706133 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a7437b7b-66fb-498a-8456-de7760db52d4-xtables-lock\") pod \"kube-proxy-nbr9q\" (UID: \"a7437b7b-66fb-498a-8456-de7760db52d4\") " pod="kube-system/kube-proxy-nbr9q" Jan 29 12:03:25.706283 kubelet[3282]: I0129 12:03:25.706164 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a7437b7b-66fb-498a-8456-de7760db52d4-lib-modules\") pod \"kube-proxy-nbr9q\" (UID: \"a7437b7b-66fb-498a-8456-de7760db52d4\") " pod="kube-system/kube-proxy-nbr9q" Jan 29 12:03:25.891943 containerd[1712]: time="2025-01-29T12:03:25.891806507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbr9q,Uid:a7437b7b-66fb-498a-8456-de7760db52d4,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:25.957066 containerd[1712]: time="2025-01-29T12:03:25.956404499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:25.961624 containerd[1712]: time="2025-01-29T12:03:25.959819388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:25.961624 containerd[1712]: time="2025-01-29T12:03:25.959918091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:25.965369 containerd[1712]: time="2025-01-29T12:03:25.965122427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:26.014021 systemd[1]: Started cri-containerd-127216980922c6ff23f306b5540444a17e3fa1f9d59276358cbcd6b63ebd0af8.scope - libcontainer container 127216980922c6ff23f306b5540444a17e3fa1f9d59276358cbcd6b63ebd0af8. Jan 29 12:03:26.031896 kubelet[3282]: I0129 12:03:26.031848 3282 topology_manager.go:215] "Topology Admit Handler" podUID="073678f7-f02c-4d33-9598-0b23b9d5aa85" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-llhqr" Jan 29 12:03:26.042636 systemd[1]: Created slice kubepods-besteffort-pod073678f7_f02c_4d33_9598_0b23b9d5aa85.slice - libcontainer container kubepods-besteffort-pod073678f7_f02c_4d33_9598_0b23b9d5aa85.slice. Jan 29 12:03:26.063604 containerd[1712]: time="2025-01-29T12:03:26.063557105Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nbr9q,Uid:a7437b7b-66fb-498a-8456-de7760db52d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"127216980922c6ff23f306b5540444a17e3fa1f9d59276358cbcd6b63ebd0af8\"" Jan 29 12:03:26.067794 containerd[1712]: time="2025-01-29T12:03:26.067747415Z" level=info msg="CreateContainer within sandbox \"127216980922c6ff23f306b5540444a17e3fa1f9d59276358cbcd6b63ebd0af8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 12:03:26.110903 containerd[1712]: time="2025-01-29T12:03:26.110857344Z" level=info msg="CreateContainer within sandbox \"127216980922c6ff23f306b5540444a17e3fa1f9d59276358cbcd6b63ebd0af8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4843fd152f246f64a104187f1b323c4879ca85016e3ef647ddb093d1b4dddb6a\"" Jan 29 12:03:26.113065 containerd[1712]: time="2025-01-29T12:03:26.111540862Z" level=info msg="StartContainer for \"4843fd152f246f64a104187f1b323c4879ca85016e3ef647ddb093d1b4dddb6a\"" Jan 29 12:03:26.143404 systemd[1]: Started cri-containerd-4843fd152f246f64a104187f1b323c4879ca85016e3ef647ddb093d1b4dddb6a.scope - libcontainer container 4843fd152f246f64a104187f1b323c4879ca85016e3ef647ddb093d1b4dddb6a. 
Jan 29 12:03:26.182686 containerd[1712]: time="2025-01-29T12:03:26.181933006Z" level=info msg="StartContainer for \"4843fd152f246f64a104187f1b323c4879ca85016e3ef647ddb093d1b4dddb6a\" returns successfully" Jan 29 12:03:26.209006 kubelet[3282]: I0129 12:03:26.208333 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/073678f7-f02c-4d33-9598-0b23b9d5aa85-var-lib-calico\") pod \"tigera-operator-7bc55997bb-llhqr\" (UID: \"073678f7-f02c-4d33-9598-0b23b9d5aa85\") " pod="tigera-operator/tigera-operator-7bc55997bb-llhqr" Jan 29 12:03:26.209006 kubelet[3282]: I0129 12:03:26.208431 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4ff5l\" (UniqueName: \"kubernetes.io/projected/073678f7-f02c-4d33-9598-0b23b9d5aa85-kube-api-access-4ff5l\") pod \"tigera-operator-7bc55997bb-llhqr\" (UID: \"073678f7-f02c-4d33-9598-0b23b9d5aa85\") " pod="tigera-operator/tigera-operator-7bc55997bb-llhqr" Jan 29 12:03:26.348621 containerd[1712]: time="2025-01-29T12:03:26.348575070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-llhqr,Uid:073678f7-f02c-4d33-9598-0b23b9d5aa85,Namespace:tigera-operator,Attempt:0,}" Jan 29 12:03:26.416824 containerd[1712]: time="2025-01-29T12:03:26.416634352Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:26.418454 containerd[1712]: time="2025-01-29T12:03:26.418162992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:26.418454 containerd[1712]: time="2025-01-29T12:03:26.418205194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:26.418454 containerd[1712]: time="2025-01-29T12:03:26.418333497Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:26.438175 systemd[1]: Started cri-containerd-e07e9f4f24e3f422fcd452afba02fef71838cbd63419795ebf025f4c1ed01182.scope - libcontainer container e07e9f4f24e3f422fcd452afba02fef71838cbd63419795ebf025f4c1ed01182. Jan 29 12:03:26.487304 containerd[1712]: time="2025-01-29T12:03:26.487268602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-llhqr,Uid:073678f7-f02c-4d33-9598-0b23b9d5aa85,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e07e9f4f24e3f422fcd452afba02fef71838cbd63419795ebf025f4c1ed01182\"" Jan 29 12:03:26.489211 containerd[1712]: time="2025-01-29T12:03:26.489180252Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 12:03:27.199039 kubelet[3282]: I0129 12:03:27.198957 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nbr9q" podStartSLOduration=2.198932341 podStartE2EDuration="2.198932341s" podCreationTimestamp="2025-01-29 12:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:03:27.198735335 +0000 UTC m=+16.175654617" watchObservedRunningTime="2025-01-29 12:03:27.198932341 +0000 UTC m=+16.175851723" Jan 29 12:03:28.354585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1326081573.mount: Deactivated successfully. 
Jan 29 12:03:28.942927 containerd[1712]: time="2025-01-29T12:03:28.942857314Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.944971 containerd[1712]: time="2025-01-29T12:03:28.944900667Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 12:03:28.949023 containerd[1712]: time="2025-01-29T12:03:28.948953873Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.953109 containerd[1712]: time="2025-01-29T12:03:28.953044380Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:28.953933 containerd[1712]: time="2025-01-29T12:03:28.953788700Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.464572547s" Jan 29 12:03:28.953933 containerd[1712]: time="2025-01-29T12:03:28.953827401Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 12:03:28.956633 containerd[1712]: time="2025-01-29T12:03:28.956551772Z" level=info msg="CreateContainer within sandbox \"e07e9f4f24e3f422fcd452afba02fef71838cbd63419795ebf025f4c1ed01182\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 12:03:29.002226 containerd[1712]: time="2025-01-29T12:03:29.002175067Z" level=info msg="CreateContainer within sandbox \"e07e9f4f24e3f422fcd452afba02fef71838cbd63419795ebf025f4c1ed01182\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"fdb990a5fb47aeb2121f16915f0107323e8d712c75e21d2c0dd0c0afcaf0d238\"" Jan 29 12:03:29.003702 containerd[1712]: time="2025-01-29T12:03:29.002749982Z" level=info msg="StartContainer for \"fdb990a5fb47aeb2121f16915f0107323e8d712c75e21d2c0dd0c0afcaf0d238\"" Jan 29 12:03:29.032158 systemd[1]: Started cri-containerd-fdb990a5fb47aeb2121f16915f0107323e8d712c75e21d2c0dd0c0afcaf0d238.scope - libcontainer container fdb990a5fb47aeb2121f16915f0107323e8d712c75e21d2c0dd0c0afcaf0d238. 
Jan 29 12:03:29.062459 containerd[1712]: time="2025-01-29T12:03:29.062403445Z" level=info msg="StartContainer for \"fdb990a5fb47aeb2121f16915f0107323e8d712c75e21d2c0dd0c0afcaf0d238\" returns successfully" Jan 29 12:03:29.207644 kubelet[3282]: I0129 12:03:29.207068 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-llhqr" podStartSLOduration=1.74074494 podStartE2EDuration="4.207028332s" podCreationTimestamp="2025-01-29 12:03:25 +0000 UTC" firstStartedPulling="2025-01-29 12:03:26.488599637 +0000 UTC m=+15.465518919" lastFinishedPulling="2025-01-29 12:03:28.954883029 +0000 UTC m=+17.931802311" observedRunningTime="2025-01-29 12:03:29.206726524 +0000 UTC m=+18.183645806" watchObservedRunningTime="2025-01-29 12:03:29.207028332 +0000 UTC m=+18.183947714" Jan 29 12:03:32.333953 kubelet[3282]: I0129 12:03:32.333899 3282 topology_manager.go:215] "Topology Admit Handler" podUID="6d9a289a-6456-45ce-9f1e-529b189ec8f5" podNamespace="calico-system" podName="calico-typha-796d9d8db6-q62vf" Jan 29 12:03:32.346508 systemd[1]: Created slice kubepods-besteffort-pod6d9a289a_6456_45ce_9f1e_529b189ec8f5.slice - libcontainer container kubepods-besteffort-pod6d9a289a_6456_45ce_9f1e_529b189ec8f5.slice. Jan 29 12:03:32.444891 kubelet[3282]: I0129 12:03:32.444841 3282 topology_manager.go:215] "Topology Admit Handler" podUID="e1404b25-7107-4952-b8f1-f980a54f3c58" podNamespace="calico-system" podName="calico-node-nxb6k" Jan 29 12:03:32.449485 kubelet[3282]: I0129 12:03:32.449452 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/6d9a289a-6456-45ce-9f1e-529b189ec8f5-typha-certs\") pod \"calico-typha-796d9d8db6-q62vf\" (UID: \"6d9a289a-6456-45ce-9f1e-529b189ec8f5\") " pod="calico-system/calico-typha-796d9d8db6-q62vf" Jan 29 12:03:32.449961 kubelet[3282]: I0129 12:03:32.449556 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6d9a289a-6456-45ce-9f1e-529b189ec8f5-tigera-ca-bundle\") pod \"calico-typha-796d9d8db6-q62vf\" (UID: \"6d9a289a-6456-45ce-9f1e-529b189ec8f5\") " pod="calico-system/calico-typha-796d9d8db6-q62vf" Jan 29 12:03:32.449961 kubelet[3282]: I0129 12:03:32.449897 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j22fz\" (UniqueName: \"kubernetes.io/projected/6d9a289a-6456-45ce-9f1e-529b189ec8f5-kube-api-access-j22fz\") pod \"calico-typha-796d9d8db6-q62vf\" (UID: \"6d9a289a-6456-45ce-9f1e-529b189ec8f5\") " pod="calico-system/calico-typha-796d9d8db6-q62vf" Jan 29 12:03:32.454629 systemd[1]: Created slice kubepods-besteffort-pode1404b25_7107_4952_b8f1_f980a54f3c58.slice - libcontainer container kubepods-besteffort-pode1404b25_7107_4952_b8f1_f980a54f3c58.slice. 
Jan 29 12:03:32.552045 kubelet[3282]: I0129 12:03:32.550835 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gwnh\" (UniqueName: \"kubernetes.io/projected/e1404b25-7107-4952-b8f1-f980a54f3c58-kube-api-access-7gwnh\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552045 kubelet[3282]: I0129 12:03:32.550886 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-xtables-lock\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552045 kubelet[3282]: I0129 12:03:32.550911 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e1404b25-7107-4952-b8f1-f980a54f3c58-node-certs\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552045 kubelet[3282]: I0129 12:03:32.550931 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-var-lib-calico\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552045 kubelet[3282]: I0129 12:03:32.550957 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-flexvol-driver-host\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552430 kubelet[3282]: I0129 12:03:32.550989 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-policysync\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552430 kubelet[3282]: I0129 12:03:32.551012 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e1404b25-7107-4952-b8f1-f980a54f3c58-tigera-ca-bundle\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552430 kubelet[3282]: I0129 12:03:32.551046 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-cni-net-dir\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552430 kubelet[3282]: I0129 12:03:32.551069 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-cni-bin-dir\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552430 kubelet[3282]: I0129 12:03:32.551131 3282 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-lib-modules\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552629 kubelet[3282]: I0129 12:03:32.551153 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-var-run-calico\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.552629 kubelet[3282]: I0129 12:03:32.551174 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e1404b25-7107-4952-b8f1-f980a54f3c58-cni-log-dir\") pod \"calico-node-nxb6k\" (UID: \"e1404b25-7107-4952-b8f1-f980a54f3c58\") " pod="calico-system/calico-node-nxb6k" Jan 29 12:03:32.591085 kubelet[3282]: I0129 12:03:32.588769 3282 topology_manager.go:215] "Topology Admit Handler" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" podNamespace="calico-system" podName="csi-node-driver-s2ffd" Jan 29 12:03:32.591085 kubelet[3282]: E0129 12:03:32.589167 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:32.653062 kubelet[3282]: I0129 12:03:32.652928 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71138072-0e15-4069-b62b-58fc03bf5cf2-kubelet-dir\") pod \"csi-node-driver-s2ffd\" (UID: \"71138072-0e15-4069-b62b-58fc03bf5cf2\") " pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:32.656007 containerd[1712]: time="2025-01-29T12:03:32.654271188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796d9d8db6-q62vf,Uid:6d9a289a-6456-45ce-9f1e-529b189ec8f5,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:32.656489 kubelet[3282]: I0129 12:03:32.654385 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22gr\" (UniqueName: \"kubernetes.io/projected/71138072-0e15-4069-b62b-58fc03bf5cf2-kube-api-access-x22gr\") pod \"csi-node-driver-s2ffd\" (UID: \"71138072-0e15-4069-b62b-58fc03bf5cf2\") " pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:32.656489 kubelet[3282]: I0129 12:03:32.654491 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71138072-0e15-4069-b62b-58fc03bf5cf2-socket-dir\") pod \"csi-node-driver-s2ffd\" (UID: \"71138072-0e15-4069-b62b-58fc03bf5cf2\") " pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:32.656489 kubelet[3282]: I0129 12:03:32.654530 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71138072-0e15-4069-b62b-58fc03bf5cf2-registration-dir\") pod \"csi-node-driver-s2ffd\" (UID: \"71138072-0e15-4069-b62b-58fc03bf5cf2\") " pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:32.656489 
kubelet[3282]: I0129 12:03:32.654588 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71138072-0e15-4069-b62b-58fc03bf5cf2-varrun\") pod \"csi-node-driver-s2ffd\" (UID: \"71138072-0e15-4069-b62b-58fc03bf5cf2\") " pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:32.669207 kubelet[3282]: E0129 12:03:32.669105 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.669207 kubelet[3282]: W0129 12:03:32.669132 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.669207 kubelet[3282]: E0129 12:03:32.669161 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.701689 kubelet[3282]: E0129 12:03:32.701401 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.701689 kubelet[3282]: W0129 12:03:32.701428 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.701689 kubelet[3282]: E0129 12:03:32.701454 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.723053 containerd[1712]: time="2025-01-29T12:03:32.722950694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:32.723268 containerd[1712]: time="2025-01-29T12:03:32.723067397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:32.723718 containerd[1712]: time="2025-01-29T12:03:32.723437705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:32.724121 containerd[1712]: time="2025-01-29T12:03:32.724034418Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:32.753190 systemd[1]: Started cri-containerd-34bc37edc9e7203051a7561ddac927019805686f1724a06d520510a9131f0b15.scope - libcontainer container 34bc37edc9e7203051a7561ddac927019805686f1724a06d520510a9131f0b15. Jan 29 12:03:32.756420 kubelet[3282]: E0129 12:03:32.756181 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.756420 kubelet[3282]: W0129 12:03:32.756202 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.756420 kubelet[3282]: E0129 12:03:32.756227 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.757132 kubelet[3282]: E0129 12:03:32.756625 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.757132 kubelet[3282]: W0129 12:03:32.756641 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.757132 kubelet[3282]: E0129 12:03:32.756679 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.757132 kubelet[3282]: E0129 12:03:32.757031 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.757132 kubelet[3282]: W0129 12:03:32.757043 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.757132 kubelet[3282]: E0129 12:03:32.757070 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.757394 kubelet[3282]: E0129 12:03:32.757376 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.757394 kubelet[3282]: W0129 12:03:32.757388 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.757477 kubelet[3282]: E0129 12:03:32.757411 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.757830 kubelet[3282]: E0129 12:03:32.757694 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.757830 kubelet[3282]: W0129 12:03:32.757711 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.757830 kubelet[3282]: E0129 12:03:32.757729 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.759646 kubelet[3282]: E0129 12:03:32.758012 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.759646 kubelet[3282]: W0129 12:03:32.758024 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.759646 kubelet[3282]: E0129 12:03:32.758048 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.759646 kubelet[3282]: E0129 12:03:32.758603 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.759646 kubelet[3282]: W0129 12:03:32.758616 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.759646 kubelet[3282]: E0129 12:03:32.758830 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.761642 kubelet[3282]: E0129 12:03:32.760835 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.761642 kubelet[3282]: W0129 12:03:32.760850 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.761642 kubelet[3282]: E0129 12:03:32.760869 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.761642 kubelet[3282]: E0129 12:03:32.761503 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.761642 kubelet[3282]: W0129 12:03:32.761517 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.764377 kubelet[3282]: E0129 12:03:32.763743 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.764377 kubelet[3282]: W0129 12:03:32.763758 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.764495 containerd[1712]: time="2025-01-29T12:03:32.764353702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxb6k,Uid:e1404b25-7107-4952-b8f1-f980a54f3c58,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:32.766019 kubelet[3282]: E0129 12:03:32.764253 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.766019 kubelet[3282]: E0129 12:03:32.764841 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.766019 kubelet[3282]: E0129 12:03:32.765143 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.766019 kubelet[3282]: W0129 12:03:32.765155 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.766019 kubelet[3282]: E0129 12:03:32.765947 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.767201 kubelet[3282]: E0129 12:03:32.766646 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.767201 kubelet[3282]: W0129 12:03:32.766662 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.767201 kubelet[3282]: E0129 12:03:32.766893 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.767942 kubelet[3282]: E0129 12:03:32.767630 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.767942 kubelet[3282]: W0129 12:03:32.767642 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.768643 kubelet[3282]: E0129 12:03:32.768444 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.769232 kubelet[3282]: E0129 12:03:32.768972 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.769232 kubelet[3282]: W0129 12:03:32.769148 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.769849 kubelet[3282]: E0129 12:03:32.769675 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.770226 kubelet[3282]: E0129 12:03:32.769938 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.770226 kubelet[3282]: W0129 12:03:32.769949 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.770226 kubelet[3282]: E0129 12:03:32.770175 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770242 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.770828 kubelet[3282]: W0129 12:03:32.770254 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770279 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770445 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.770828 kubelet[3282]: W0129 12:03:32.770454 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770631 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770660 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.770828 kubelet[3282]: W0129 12:03:32.770669 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.770828 kubelet[3282]: E0129 12:03:32.770750 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.772279 kubelet[3282]: E0129 12:03:32.770882 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.772279 kubelet[3282]: W0129 12:03:32.770892 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.772279 kubelet[3282]: E0129 12:03:32.770909 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.772279 kubelet[3282]: E0129 12:03:32.771249 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.772279 kubelet[3282]: W0129 12:03:32.771286 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.772279 kubelet[3282]: E0129 12:03:32.771308 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.773859 kubelet[3282]: E0129 12:03:32.773691 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.773859 kubelet[3282]: W0129 12:03:32.773706 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.773859 kubelet[3282]: E0129 12:03:32.773739 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.774578 kubelet[3282]: E0129 12:03:32.774011 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.774578 kubelet[3282]: W0129 12:03:32.774022 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.774578 kubelet[3282]: E0129 12:03:32.774379 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.776158 kubelet[3282]: E0129 12:03:32.775615 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.776158 kubelet[3282]: W0129 12:03:32.775630 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.776158 kubelet[3282]: E0129 12:03:32.775661 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.777103 kubelet[3282]: E0129 12:03:32.777085 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.777103 kubelet[3282]: W0129 12:03:32.777103 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.777932 kubelet[3282]: E0129 12:03:32.777168 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.778204 kubelet[3282]: E0129 12:03:32.778187 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.778204 kubelet[3282]: W0129 12:03:32.778204 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.778335 kubelet[3282]: E0129 12:03:32.778218 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:32.789263 kubelet[3282]: E0129 12:03:32.788773 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:32.789263 kubelet[3282]: W0129 12:03:32.788790 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:32.789263 kubelet[3282]: E0129 12:03:32.788804 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:32.824663 containerd[1712]: time="2025-01-29T12:03:32.823746904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:32.824663 containerd[1712]: time="2025-01-29T12:03:32.823823006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:32.824663 containerd[1712]: time="2025-01-29T12:03:32.823865807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:32.824663 containerd[1712]: time="2025-01-29T12:03:32.824166914Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:32.853264 containerd[1712]: time="2025-01-29T12:03:32.852487935Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-796d9d8db6-q62vf,Uid:6d9a289a-6456-45ce-9f1e-529b189ec8f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"34bc37edc9e7203051a7561ddac927019805686f1724a06d520510a9131f0b15\"" Jan 29 12:03:32.861897 containerd[1712]: time="2025-01-29T12:03:32.861851540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 12:03:32.870354 systemd[1]: Started cri-containerd-792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47.scope - libcontainer container 792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47. Jan 29 12:03:32.904688 containerd[1712]: time="2025-01-29T12:03:32.904545476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-nxb6k,Uid:e1404b25-7107-4952-b8f1-f980a54f3c58,Namespace:calico-system,Attempt:0,} returns sandbox id \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\"" Jan 29 12:03:34.127027 kubelet[3282]: E0129 12:03:34.126958 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:34.140891 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3028212343.mount: Deactivated successfully. 
Jan 29 12:03:34.947848 containerd[1712]: time="2025-01-29T12:03:34.947797439Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:34.955278 containerd[1712]: time="2025-01-29T12:03:34.955217330Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 29 12:03:34.959909 containerd[1712]: time="2025-01-29T12:03:34.959852650Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:34.967412 containerd[1712]: time="2025-01-29T12:03:34.967333142Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:34.968558 containerd[1712]: time="2025-01-29T12:03:34.968015360Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.106109718s" Jan 29 12:03:34.968558 containerd[1712]: time="2025-01-29T12:03:34.968053561Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 12:03:34.970859 containerd[1712]: time="2025-01-29T12:03:34.970828432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 12:03:34.992513 containerd[1712]: time="2025-01-29T12:03:34.992467789Z" level=info msg="CreateContainer within sandbox \"34bc37edc9e7203051a7561ddac927019805686f1724a06d520510a9131f0b15\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 12:03:35.025873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3558170283.mount: Deactivated successfully. Jan 29 12:03:35.040761 containerd[1712]: time="2025-01-29T12:03:35.040716431Z" level=info msg="CreateContainer within sandbox \"34bc37edc9e7203051a7561ddac927019805686f1724a06d520510a9131f0b15\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4a3e80550c9a6f0478620f72b88fb4566b96feb1528a6ce5b439596c0d3ec4e5\"" Jan 29 12:03:35.041302 containerd[1712]: time="2025-01-29T12:03:35.041275645Z" level=info msg="StartContainer for \"4a3e80550c9a6f0478620f72b88fb4566b96feb1528a6ce5b439596c0d3ec4e5\"" Jan 29 12:03:35.071160 systemd[1]: Started cri-containerd-4a3e80550c9a6f0478620f72b88fb4566b96feb1528a6ce5b439596c0d3ec4e5.scope - libcontainer container 4a3e80550c9a6f0478620f72b88fb4566b96feb1528a6ce5b439596c0d3ec4e5. 
Jan 29 12:03:35.117079 containerd[1712]: time="2025-01-29T12:03:35.116921791Z" level=info msg="StartContainer for \"4a3e80550c9a6f0478620f72b88fb4566b96feb1528a6ce5b439596c0d3ec4e5\" returns successfully" Jan 29 12:03:35.271851 kubelet[3282]: E0129 12:03:35.271729 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.271851 kubelet[3282]: W0129 12:03:35.271755 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.271851 kubelet[3282]: E0129 12:03:35.271782 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.272166 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275028 kubelet[3282]: W0129 12:03:35.272181 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.272196 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.272853 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275028 kubelet[3282]: W0129 12:03:35.272868 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.272883 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.273195 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275028 kubelet[3282]: W0129 12:03:35.273206 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.273234 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.275028 kubelet[3282]: E0129 12:03:35.273432 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275545 kubelet[3282]: W0129 12:03:35.273441 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.273449 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.273639 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275545 kubelet[3282]: W0129 12:03:35.273647 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.273655 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.273850 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275545 kubelet[3282]: W0129 12:03:35.273859 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.273869 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.275545 kubelet[3282]: E0129 12:03:35.274082 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.275545 kubelet[3282]: W0129 12:03:35.274093 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274102 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274294 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276172 kubelet[3282]: W0129 12:03:35.274302 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274310 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274493 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276172 kubelet[3282]: W0129 12:03:35.274502 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274510 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274750 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276172 kubelet[3282]: W0129 12:03:35.274763 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276172 kubelet[3282]: E0129 12:03:35.274775 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275021 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276424 kubelet[3282]: W0129 12:03:35.275033 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275045 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275293 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276424 kubelet[3282]: W0129 12:03:35.275303 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275316 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275542 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276424 kubelet[3282]: W0129 12:03:35.275574 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275587 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.276424 kubelet[3282]: E0129 12:03:35.275963 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.276652 kubelet[3282]: W0129 12:03:35.275975 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.276652 kubelet[3282]: E0129 12:03:35.276038 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.284375 kubelet[3282]: E0129 12:03:35.284360 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.284514 kubelet[3282]: W0129 12:03:35.284455 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.284514 kubelet[3282]: E0129 12:03:35.284470 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.284714 kubelet[3282]: E0129 12:03:35.284696 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.284714 kubelet[3282]: W0129 12:03:35.284711 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.284827 kubelet[3282]: E0129 12:03:35.284729 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.285055 kubelet[3282]: E0129 12:03:35.285038 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.285055 kubelet[3282]: W0129 12:03:35.285052 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.285170 kubelet[3282]: E0129 12:03:35.285070 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.285343 kubelet[3282]: E0129 12:03:35.285327 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.285343 kubelet[3282]: W0129 12:03:35.285339 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.285451 kubelet[3282]: E0129 12:03:35.285368 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.285617 kubelet[3282]: E0129 12:03:35.285601 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.285617 kubelet[3282]: W0129 12:03:35.285613 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.285846 kubelet[3282]: E0129 12:03:35.285632 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.285846 kubelet[3282]: E0129 12:03:35.285813 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.285846 kubelet[3282]: W0129 12:03:35.285825 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.286411 kubelet[3282]: E0129 12:03:35.286208 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.286411 kubelet[3282]: W0129 12:03:35.286223 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.286411 kubelet[3282]: E0129 12:03:35.286302 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.286411 kubelet[3282]: E0129 12:03:35.286369 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.286764 kubelet[3282]: E0129 12:03:35.286638 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.286764 kubelet[3282]: W0129 12:03:35.286651 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.286764 kubelet[3282]: E0129 12:03:35.286669 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.287134 kubelet[3282]: E0129 12:03:35.286891 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.287134 kubelet[3282]: W0129 12:03:35.286901 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.287134 kubelet[3282]: E0129 12:03:35.286919 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.287447 kubelet[3282]: E0129 12:03:35.287362 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.287447 kubelet[3282]: W0129 12:03:35.287373 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.287447 kubelet[3282]: E0129 12:03:35.287393 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.287775 kubelet[3282]: E0129 12:03:35.287712 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.287775 kubelet[3282]: W0129 12:03:35.287723 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.288019 kubelet[3282]: E0129 12:03:35.287859 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.288255 kubelet[3282]: E0129 12:03:35.288237 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.288255 kubelet[3282]: W0129 12:03:35.288254 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.288371 kubelet[3282]: E0129 12:03:35.288349 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.288741 kubelet[3282]: E0129 12:03:35.288627 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.288741 kubelet[3282]: W0129 12:03:35.288640 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.288741 kubelet[3282]: E0129 12:03:35.288727 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.288913 kubelet[3282]: E0129 12:03:35.288854 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.288913 kubelet[3282]: W0129 12:03:35.288863 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.288913 kubelet[3282]: E0129 12:03:35.288887 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:35.289170 kubelet[3282]: E0129 12:03:35.289152 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.289170 kubelet[3282]: W0129 12:03:35.289166 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.289284 kubelet[3282]: E0129 12:03:35.289190 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.289432 kubelet[3282]: E0129 12:03:35.289414 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.289432 kubelet[3282]: W0129 12:03:35.289429 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.289532 kubelet[3282]: E0129 12:03:35.289442 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.289677 kubelet[3282]: E0129 12:03:35.289660 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.289677 kubelet[3282]: W0129 12:03:35.289672 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.289782 kubelet[3282]: E0129 12:03:35.289684 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:35.290244 kubelet[3282]: E0129 12:03:35.290228 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:35.290244 kubelet[3282]: W0129 12:03:35.290242 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:35.290339 kubelet[3282]: E0129 12:03:35.290256 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.127738 kubelet[3282]: E0129 12:03:36.126974 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:36.211646 kubelet[3282]: I0129 12:03:36.211603 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:03:36.283306 kubelet[3282]: E0129 12:03:36.283269 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.283306 kubelet[3282]: W0129 12:03:36.283297 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.284010 kubelet[3282]: E0129 12:03:36.283325 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.284010 kubelet[3282]: E0129 12:03:36.283783 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.284010 kubelet[3282]: W0129 12:03:36.283799 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.284010 kubelet[3282]: E0129 12:03:36.283816 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.284239 kubelet[3282]: E0129 12:03:36.284086 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.284239 kubelet[3282]: W0129 12:03:36.284099 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.284239 kubelet[3282]: E0129 12:03:36.284114 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.284401 kubelet[3282]: E0129 12:03:36.284337 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.284401 kubelet[3282]: W0129 12:03:36.284347 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.284401 kubelet[3282]: E0129 12:03:36.284360 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.284595 kubelet[3282]: E0129 12:03:36.284579 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.284647 kubelet[3282]: W0129 12:03:36.284594 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.284647 kubelet[3282]: E0129 12:03:36.284627 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.284899 kubelet[3282]: E0129 12:03:36.284878 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.284899 kubelet[3282]: W0129 12:03:36.284892 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.285100 kubelet[3282]: E0129 12:03:36.284906 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.285151 kubelet[3282]: E0129 12:03:36.285141 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.285197 kubelet[3282]: W0129 12:03:36.285152 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.285197 kubelet[3282]: E0129 12:03:36.285166 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.285372 kubelet[3282]: E0129 12:03:36.285350 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.285372 kubelet[3282]: W0129 12:03:36.285368 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.285505 kubelet[3282]: E0129 12:03:36.285380 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.285601 kubelet[3282]: E0129 12:03:36.285586 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.285601 kubelet[3282]: W0129 12:03:36.285598 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.285715 kubelet[3282]: E0129 12:03:36.285611 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.285800 kubelet[3282]: E0129 12:03:36.285786 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.285800 kubelet[3282]: W0129 12:03:36.285797 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.285901 kubelet[3282]: E0129 12:03:36.285810 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.286016 kubelet[3282]: E0129 12:03:36.285999 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.286016 kubelet[3282]: W0129 12:03:36.286013 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286026 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286223 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.287672 kubelet[3282]: W0129 12:03:36.286233 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286241 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286405 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.287672 kubelet[3282]: W0129 12:03:36.286413 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286422 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286586 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.287672 kubelet[3282]: W0129 12:03:36.286594 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.287672 kubelet[3282]: E0129 12:03:36.286602 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.287908 kubelet[3282]: E0129 12:03:36.286763 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.287908 kubelet[3282]: W0129 12:03:36.286771 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.287908 kubelet[3282]: E0129 12:03:36.286779 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.292170 kubelet[3282]: E0129 12:03:36.292086 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.292170 kubelet[3282]: W0129 12:03:36.292102 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.292170 kubelet[3282]: E0129 12:03:36.292117 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.292457 kubelet[3282]: E0129 12:03:36.292439 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.292457 kubelet[3282]: W0129 12:03:36.292453 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.292696 kubelet[3282]: E0129 12:03:36.292473 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.292769 kubelet[3282]: E0129 12:03:36.292716 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.292769 kubelet[3282]: W0129 12:03:36.292729 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.292769 kubelet[3282]: E0129 12:03:36.292756 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.293031 kubelet[3282]: E0129 12:03:36.293019 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.293084 kubelet[3282]: W0129 12:03:36.293033 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.293084 kubelet[3282]: E0129 12:03:36.293051 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.293277 kubelet[3282]: E0129 12:03:36.293266 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.293336 kubelet[3282]: W0129 12:03:36.293282 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.293336 kubelet[3282]: E0129 12:03:36.293299 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.293519 kubelet[3282]: E0129 12:03:36.293506 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.293569 kubelet[3282]: W0129 12:03:36.293522 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.293569 kubelet[3282]: E0129 12:03:36.293535 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.293778 kubelet[3282]: E0129 12:03:36.293760 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.293778 kubelet[3282]: W0129 12:03:36.293774 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.294304 kubelet[3282]: E0129 12:03:36.293836 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.294485 kubelet[3282]: E0129 12:03:36.294469 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.294485 kubelet[3282]: W0129 12:03:36.294482 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.294623 kubelet[3282]: E0129 12:03:36.294544 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.294795 kubelet[3282]: E0129 12:03:36.294780 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.294795 kubelet[3282]: W0129 12:03:36.294792 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.294924 kubelet[3282]: E0129 12:03:36.294852 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.295105 kubelet[3282]: E0129 12:03:36.295090 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.295105 kubelet[3282]: W0129 12:03:36.295101 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.295435 kubelet[3282]: E0129 12:03:36.295120 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.295573 kubelet[3282]: E0129 12:03:36.295555 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.295573 kubelet[3282]: W0129 12:03:36.295570 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.295786 kubelet[3282]: E0129 12:03:36.295588 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.295846 kubelet[3282]: E0129 12:03:36.295793 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.295846 kubelet[3282]: W0129 12:03:36.295804 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.295846 kubelet[3282]: E0129 12:03:36.295821 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.296101 kubelet[3282]: E0129 12:03:36.296072 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.296101 kubelet[3282]: W0129 12:03:36.296083 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.296282 kubelet[3282]: E0129 12:03:36.296188 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.296415 kubelet[3282]: E0129 12:03:36.296401 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.296415 kubelet[3282]: W0129 12:03:36.296413 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.296521 kubelet[3282]: E0129 12:03:36.296440 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.296706 kubelet[3282]: E0129 12:03:36.296687 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.296706 kubelet[3282]: W0129 12:03:36.296704 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.296832 kubelet[3282]: E0129 12:03:36.296724 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.297010 kubelet[3282]: E0129 12:03:36.296966 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.297146 kubelet[3282]: W0129 12:03:36.297014 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.297146 kubelet[3282]: E0129 12:03:36.297036 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.297448 kubelet[3282]: E0129 12:03:36.297430 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.297448 kubelet[3282]: W0129 12:03:36.297444 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.297557 kubelet[3282]: E0129 12:03:36.297463 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 12:03:36.297690 kubelet[3282]: E0129 12:03:36.297676 3282 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 12:03:36.297690 kubelet[3282]: W0129 12:03:36.297688 3282 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 12:03:36.297771 kubelet[3282]: E0129 12:03:36.297701 3282 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 12:03:36.298082 containerd[1712]: time="2025-01-29T12:03:36.298043185Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:36.301973 containerd[1712]: time="2025-01-29T12:03:36.301799782Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 29 12:03:36.305603 containerd[1712]: time="2025-01-29T12:03:36.305548378Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:36.310142 containerd[1712]: time="2025-01-29T12:03:36.310074394Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:36.311316 containerd[1712]: time="2025-01-29T12:03:36.310767612Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.339902079s" Jan 29 12:03:36.311316 containerd[1712]: time="2025-01-29T12:03:36.310820914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 12:03:36.313508 containerd[1712]: time="2025-01-29T12:03:36.313467582Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 12:03:36.350189 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount727069385.mount: Deactivated successfully. Jan 29 12:03:36.372162 containerd[1712]: time="2025-01-29T12:03:36.371661279Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119\"" Jan 29 12:03:36.373246 containerd[1712]: time="2025-01-29T12:03:36.373211119Z" level=info msg="StartContainer for \"6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119\"" Jan 29 12:03:36.423320 systemd[1]: Started cri-containerd-6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119.scope - libcontainer container 6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119. Jan 29 12:03:36.482255 containerd[1712]: time="2025-01-29T12:03:36.482204424Z" level=info msg="StartContainer for \"6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119\" returns successfully" Jan 29 12:03:36.501642 systemd[1]: cri-containerd-6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119.scope: Deactivated successfully. Jan 29 12:03:36.977865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119-rootfs.mount: Deactivated successfully. 
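The repeated driver-call.go / plugins.go entries above all stem from the same condition: the kubelet probes the FlexVolume directory nodeagent~uds, fails to find an executable driver at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, and then tries to JSON-decode the (empty) output of the "init" call, which yields "unexpected end of JSON input". The sketch below is an illustration only, not the kubelet's own driver-call implementation; the DriverStatus type is a hypothetical stand-in for the driver reply shape.

// Illustration only: decoding an empty FlexVolume "init" reply the way the
// log entries above describe. Hypothetical type names; not kubelet code.
package main

import (
	"encoding/json"
	"fmt"
)

// DriverStatus is a stand-in for the general shape of a FlexVolume driver reply.
type DriverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func main() {
	output := []byte("") // the driver executable was not found, so its output is empty
	var st DriverStatus
	if err := json.Unmarshal(output, &st); err != nil {
		// Prints: unmarshal failed: unexpected end of JSON input
		fmt.Println("unmarshal failed:", err)
	}
}

Decoding any empty byte slice with encoding/json produces exactly this error, which is why every probe of the empty plugin directory logs the same pair of messages.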
Jan 29 12:03:37.231638 kubelet[3282]: I0129 12:03:37.231376 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-796d9d8db6-q62vf" podStartSLOduration=3.122828724 podStartE2EDuration="5.231350301s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:03:32.86094202 +0000 UTC m=+21.837861302" lastFinishedPulling="2025-01-29 12:03:34.969463597 +0000 UTC m=+23.946382879" observedRunningTime="2025-01-29 12:03:35.242042411 +0000 UTC m=+24.218961693" watchObservedRunningTime="2025-01-29 12:03:37.231350301 +0000 UTC m=+26.208269583" Jan 29 12:03:37.848500 containerd[1712]: time="2025-01-29T12:03:37.848410080Z" level=info msg="shim disconnected" id=6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119 namespace=k8s.io Jan 29 12:03:37.848500 containerd[1712]: time="2025-01-29T12:03:37.848494382Z" level=warning msg="cleaning up after shim disconnected" id=6e87a5002d99290c2e8f3f5c77dd2035a9f16937e180937e2fa8d7a317744119 namespace=k8s.io Jan 29 12:03:37.848500 containerd[1712]: time="2025-01-29T12:03:37.848507082Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:37.862514 containerd[1712]: time="2025-01-29T12:03:37.862450341Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:03:37Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:03:38.127076 kubelet[3282]: E0129 12:03:38.126713 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:38.221567 containerd[1712]: time="2025-01-29T12:03:38.221497451Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 12:03:40.128006 kubelet[3282]: E0129 12:03:40.127330 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:42.126755 kubelet[3282]: E0129 12:03:42.126703 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:42.849511 containerd[1712]: time="2025-01-29T12:03:42.849446743Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:42.852547 containerd[1712]: time="2025-01-29T12:03:42.852460819Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 12:03:42.856473 containerd[1712]: time="2025-01-29T12:03:42.856400718Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:42.861334 containerd[1712]: time="2025-01-29T12:03:42.861262740Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:42.863841 containerd[1712]: time="2025-01-29T12:03:42.862660575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.640720813s" Jan 29 12:03:42.863841 containerd[1712]: time="2025-01-29T12:03:42.862717976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 12:03:42.870748 containerd[1712]: time="2025-01-29T12:03:42.870701677Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 12:03:42.919268 containerd[1712]: time="2025-01-29T12:03:42.919214296Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae\"" Jan 29 12:03:42.920044 containerd[1712]: time="2025-01-29T12:03:42.919904613Z" level=info msg="StartContainer for \"19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae\"" Jan 29 12:03:42.958830 systemd[1]: run-containerd-runc-k8s.io-19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae-runc.whTv2P.mount: Deactivated successfully. Jan 29 12:03:42.970227 systemd[1]: Started cri-containerd-19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae.scope - libcontainer container 19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae. Jan 29 12:03:43.008714 containerd[1712]: time="2025-01-29T12:03:43.008357136Z" level=info msg="StartContainer for \"19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae\" returns successfully" Jan 29 12:03:44.127634 kubelet[3282]: E0129 12:03:44.127562 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:44.504582 containerd[1712]: time="2025-01-29T12:03:44.504439930Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 12:03:44.506972 systemd[1]: cri-containerd-19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae.scope: Deactivated successfully. Jan 29 12:03:44.533778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae-rootfs.mount: Deactivated successfully. 
Jan 29 12:03:44.575044 kubelet[3282]: I0129 12:03:44.574557 3282 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.612735 3282 topology_manager.go:215] "Topology Admit Handler" podUID="20cf8bd9-7e52-4094-8e72-0357f70114de" podNamespace="calico-system" podName="calico-kube-controllers-54d96776db-zgpqq" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.619822 3282 topology_manager.go:215] "Topology Admit Handler" podUID="ad2047c6-04db-4422-b1e5-5b03f71d15f2" podNamespace="calico-apiserver" podName="calico-apiserver-559dc9496c-vw9l2" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.620067 3282 topology_manager.go:215] "Topology Admit Handler" podUID="fb070daf-6fcc-4f94-819c-4f946e1c33fb" podNamespace="kube-system" podName="coredns-7db6d8ff4d-x2x7f" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.623849 3282 topology_manager.go:215] "Topology Admit Handler" podUID="aa7a0ece-20eb-47fa-a309-d56e36ab93b3" podNamespace="calico-apiserver" podName="calico-apiserver-559dc9496c-djhw8" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.629027 3282 topology_manager.go:215] "Topology Admit Handler" podUID="ad7842c6-0124-41f1-be81-515378bf6b06" podNamespace="kube-system" podName="coredns-7db6d8ff4d-dvzbx" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.748173 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvtdz\" (UniqueName: \"kubernetes.io/projected/20cf8bd9-7e52-4094-8e72-0357f70114de-kube-api-access-jvtdz\") pod \"calico-kube-controllers-54d96776db-zgpqq\" (UID: \"20cf8bd9-7e52-4094-8e72-0357f70114de\") " pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" Jan 29 12:03:45.029077 kubelet[3282]: I0129 12:03:44.748280 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aa7a0ece-20eb-47fa-a309-d56e36ab93b3-calico-apiserver-certs\") pod \"calico-apiserver-559dc9496c-djhw8\" (UID: \"aa7a0ece-20eb-47fa-a309-d56e36ab93b3\") " pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" Jan 29 12:03:44.627652 systemd[1]: Created slice kubepods-besteffort-pod20cf8bd9_7e52_4094_8e72_0357f70114de.slice - libcontainer container kubepods-besteffort-pod20cf8bd9_7e52_4094_8e72_0357f70114de.slice. 
Jan 29 12:03:45.029657 kubelet[3282]: I0129 12:03:44.748317 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xt86j\" (UniqueName: \"kubernetes.io/projected/aa7a0ece-20eb-47fa-a309-d56e36ab93b3-kube-api-access-xt86j\") pod \"calico-apiserver-559dc9496c-djhw8\" (UID: \"aa7a0ece-20eb-47fa-a309-d56e36ab93b3\") " pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" Jan 29 12:03:45.029657 kubelet[3282]: I0129 12:03:44.748342 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ad2047c6-04db-4422-b1e5-5b03f71d15f2-calico-apiserver-certs\") pod \"calico-apiserver-559dc9496c-vw9l2\" (UID: \"ad2047c6-04db-4422-b1e5-5b03f71d15f2\") " pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" Jan 29 12:03:45.029657 kubelet[3282]: I0129 12:03:44.748376 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4t2k\" (UniqueName: \"kubernetes.io/projected/fb070daf-6fcc-4f94-819c-4f946e1c33fb-kube-api-access-n4t2k\") pod \"coredns-7db6d8ff4d-x2x7f\" (UID: \"fb070daf-6fcc-4f94-819c-4f946e1c33fb\") " pod="kube-system/coredns-7db6d8ff4d-x2x7f" Jan 29 12:03:45.029657 kubelet[3282]: I0129 12:03:44.748408 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ad7842c6-0124-41f1-be81-515378bf6b06-config-volume\") pod \"coredns-7db6d8ff4d-dvzbx\" (UID: \"ad7842c6-0124-41f1-be81-515378bf6b06\") " pod="kube-system/coredns-7db6d8ff4d-dvzbx" Jan 29 12:03:45.029657 kubelet[3282]: I0129 12:03:44.748428 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gc6r\" (UniqueName: \"kubernetes.io/projected/ad2047c6-04db-4422-b1e5-5b03f71d15f2-kube-api-access-9gc6r\") pod \"calico-apiserver-559dc9496c-vw9l2\" (UID: \"ad2047c6-04db-4422-b1e5-5b03f71d15f2\") " pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" Jan 29 12:03:44.639953 systemd[1]: Created slice kubepods-burstable-podfb070daf_6fcc_4f94_819c_4f946e1c33fb.slice - libcontainer container kubepods-burstable-podfb070daf_6fcc_4f94_819c_4f946e1c33fb.slice. 
Jan 29 12:03:45.030000 kubelet[3282]: I0129 12:03:44.748461 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/20cf8bd9-7e52-4094-8e72-0357f70114de-tigera-ca-bundle\") pod \"calico-kube-controllers-54d96776db-zgpqq\" (UID: \"20cf8bd9-7e52-4094-8e72-0357f70114de\") " pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" Jan 29 12:03:45.030000 kubelet[3282]: I0129 12:03:44.748488 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgmxz\" (UniqueName: \"kubernetes.io/projected/ad7842c6-0124-41f1-be81-515378bf6b06-kube-api-access-xgmxz\") pod \"coredns-7db6d8ff4d-dvzbx\" (UID: \"ad7842c6-0124-41f1-be81-515378bf6b06\") " pod="kube-system/coredns-7db6d8ff4d-dvzbx" Jan 29 12:03:45.030000 kubelet[3282]: I0129 12:03:44.748527 3282 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fb070daf-6fcc-4f94-819c-4f946e1c33fb-config-volume\") pod \"coredns-7db6d8ff4d-x2x7f\" (UID: \"fb070daf-6fcc-4f94-819c-4f946e1c33fb\") " pod="kube-system/coredns-7db6d8ff4d-x2x7f" Jan 29 12:03:44.653388 systemd[1]: Created slice kubepods-besteffort-podad2047c6_04db_4422_b1e5_5b03f71d15f2.slice - libcontainer container kubepods-besteffort-podad2047c6_04db_4422_b1e5_5b03f71d15f2.slice. Jan 29 12:03:44.662356 systemd[1]: Created slice kubepods-burstable-podad7842c6_0124_41f1_be81_515378bf6b06.slice - libcontainer container kubepods-burstable-podad7842c6_0124_41f1_be81_515378bf6b06.slice. Jan 29 12:03:44.669909 systemd[1]: Created slice kubepods-besteffort-podaa7a0ece_20eb_47fa_a309_d56e36ab93b3.slice - libcontainer container kubepods-besteffort-podaa7a0ece_20eb_47fa_a309_d56e36ab93b3.slice. Jan 29 12:03:45.331120 containerd[1712]: time="2025-01-29T12:03:45.331061301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d96776db-zgpqq,Uid:20cf8bd9-7e52-4094-8e72-0357f70114de,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:45.337997 containerd[1712]: time="2025-01-29T12:03:45.337937274Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-vw9l2,Uid:ad2047c6-04db-4422-b1e5-5b03f71d15f2,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:03:45.338334 containerd[1712]: time="2025-01-29T12:03:45.338303683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dvzbx,Uid:ad7842c6-0124-41f1-be81-515378bf6b06,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:45.342510 containerd[1712]: time="2025-01-29T12:03:45.342465488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-djhw8,Uid:aa7a0ece-20eb-47fa-a309-d56e36ab93b3,Namespace:calico-apiserver,Attempt:0,}" Jan 29 12:03:45.372848 containerd[1712]: time="2025-01-29T12:03:45.372787050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x2x7f,Uid:fb070daf-6fcc-4f94-819c-4f946e1c33fb,Namespace:kube-system,Attempt:0,}" Jan 29 12:03:46.133698 systemd[1]: Created slice kubepods-besteffort-pod71138072_0e15_4069_b62b_58fc03bf5cf2.slice - libcontainer container kubepods-besteffort-pod71138072_0e15_4069_b62b_58fc03bf5cf2.slice. 
Jan 29 12:03:46.136096 containerd[1712]: time="2025-01-29T12:03:46.136047790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2ffd,Uid:71138072-0e15-4069-b62b-58fc03bf5cf2,Namespace:calico-system,Attempt:0,}" Jan 29 12:03:46.156864 containerd[1712]: time="2025-01-29T12:03:46.156779831Z" level=info msg="shim disconnected" id=19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae namespace=k8s.io Jan 29 12:03:46.156864 containerd[1712]: time="2025-01-29T12:03:46.156844932Z" level=warning msg="cleaning up after shim disconnected" id=19b8f6be50f1f253f27958fd345edcf40e3f2ce9b6efac118f9fcc403abe20ae namespace=k8s.io Jan 29 12:03:46.156864 containerd[1712]: time="2025-01-29T12:03:46.156858032Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 12:03:46.172266 containerd[1712]: time="2025-01-29T12:03:46.172204484Z" level=warning msg="cleanup warnings time=\"2025-01-29T12:03:46Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 29 12:03:46.240533 containerd[1712]: time="2025-01-29T12:03:46.240293802Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 12:03:46.475292 containerd[1712]: time="2025-01-29T12:03:46.474826953Z" level=error msg="Failed to destroy network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.476328 containerd[1712]: time="2025-01-29T12:03:46.476278177Z" level=error msg="encountered an error cleaning up failed sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.477537 containerd[1712]: time="2025-01-29T12:03:46.477173492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d96776db-zgpqq,Uid:20cf8bd9-7e52-4094-8e72-0357f70114de,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.477685 kubelet[3282]: E0129 12:03:46.477465 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.478077 kubelet[3282]: E0129 12:03:46.477718 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" Jan 29 12:03:46.478077 kubelet[3282]: E0129 12:03:46.477753 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" Jan 29 12:03:46.478077 kubelet[3282]: E0129 12:03:46.477831 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54d96776db-zgpqq_calico-system(20cf8bd9-7e52-4094-8e72-0357f70114de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54d96776db-zgpqq_calico-system(20cf8bd9-7e52-4094-8e72-0357f70114de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" podUID="20cf8bd9-7e52-4094-8e72-0357f70114de" Jan 29 12:03:46.521931 containerd[1712]: time="2025-01-29T12:03:46.521298117Z" level=error msg="Failed to destroy network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.522861 containerd[1712]: time="2025-01-29T12:03:46.522594938Z" level=error msg="encountered an error cleaning up failed sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.523293 containerd[1712]: time="2025-01-29T12:03:46.523148747Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dvzbx,Uid:ad7842c6-0124-41f1-be81-515378bf6b06,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.523984 kubelet[3282]: E0129 12:03:46.523925 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.524112 kubelet[3282]: E0129 12:03:46.524058 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dvzbx" Jan 29 12:03:46.524112 kubelet[3282]: E0129 12:03:46.524093 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-dvzbx" Jan 29 12:03:46.525518 kubelet[3282]: E0129 12:03:46.524446 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-dvzbx_kube-system(ad7842c6-0124-41f1-be81-515378bf6b06)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-dvzbx_kube-system(ad7842c6-0124-41f1-be81-515378bf6b06)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dvzbx" podUID="ad7842c6-0124-41f1-be81-515378bf6b06" Jan 29 12:03:46.571177 containerd[1712]: time="2025-01-29T12:03:46.571117435Z" level=error msg="Failed to destroy network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.575430 containerd[1712]: time="2025-01-29T12:03:46.575292203Z" level=error msg="encountered an error cleaning up failed sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.575430 containerd[1712]: time="2025-01-29T12:03:46.575373105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x2x7f,Uid:fb070daf-6fcc-4f94-819c-4f946e1c33fb,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.575865 kubelet[3282]: E0129 12:03:46.575757 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.575865 kubelet[3282]: E0129 12:03:46.575833 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x2x7f" Jan 29 12:03:46.575865 kubelet[3282]: E0129 12:03:46.575865 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-x2x7f" Jan 29 12:03:46.576257 kubelet[3282]: E0129 12:03:46.575924 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-x2x7f_kube-system(fb070daf-6fcc-4f94-819c-4f946e1c33fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-x2x7f_kube-system(fb070daf-6fcc-4f94-819c-4f946e1c33fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x2x7f" podUID="fb070daf-6fcc-4f94-819c-4f946e1c33fb" Jan 29 12:03:46.576817 containerd[1712]: time="2025-01-29T12:03:46.576504623Z" level=error msg="Failed to destroy network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.579706 containerd[1712]: time="2025-01-29T12:03:46.579654875Z" level=error msg="encountered an error cleaning up failed sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.580242 containerd[1712]: time="2025-01-29T12:03:46.580035881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2ffd,Uid:71138072-0e15-4069-b62b-58fc03bf5cf2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.580804 kubelet[3282]: E0129 12:03:46.580734 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.580804 kubelet[3282]: E0129 12:03:46.580795 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:46.580931 kubelet[3282]: E0129 12:03:46.580821 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-s2ffd" Jan 29 12:03:46.580931 kubelet[3282]: E0129 12:03:46.580874 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-s2ffd_calico-system(71138072-0e15-4069-b62b-58fc03bf5cf2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-s2ffd_calico-system(71138072-0e15-4069-b62b-58fc03bf5cf2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:46.586833 containerd[1712]: time="2025-01-29T12:03:46.586113981Z" level=error msg="Failed to destroy network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.587213 containerd[1712]: time="2025-01-29T12:03:46.587179398Z" level=error msg="encountered an error cleaning up failed sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.587368 containerd[1712]: time="2025-01-29T12:03:46.587335901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-djhw8,Uid:aa7a0ece-20eb-47fa-a309-d56e36ab93b3,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.588564 kubelet[3282]: E0129 12:03:46.588525 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.588678 kubelet[3282]: E0129 12:03:46.588587 3282 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" Jan 29 12:03:46.588678 kubelet[3282]: E0129 12:03:46.588619 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" Jan 29 12:03:46.588796 kubelet[3282]: E0129 12:03:46.588675 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559dc9496c-djhw8_calico-apiserver(aa7a0ece-20eb-47fa-a309-d56e36ab93b3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559dc9496c-djhw8_calico-apiserver(aa7a0ece-20eb-47fa-a309-d56e36ab93b3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" podUID="aa7a0ece-20eb-47fa-a309-d56e36ab93b3" Jan 29 12:03:46.589548 containerd[1712]: time="2025-01-29T12:03:46.589501137Z" level=error msg="Failed to destroy network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.590089 containerd[1712]: time="2025-01-29T12:03:46.590044346Z" level=error msg="encountered an error cleaning up failed sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.590772 containerd[1712]: time="2025-01-29T12:03:46.590224148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-vw9l2,Uid:ad2047c6-04db-4422-b1e5-5b03f71d15f2,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.590888 kubelet[3282]: E0129 12:03:46.590502 3282 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:46.590888 kubelet[3282]: E0129 12:03:46.590570 3282 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" Jan 29 12:03:46.590888 kubelet[3282]: E0129 12:03:46.590594 3282 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" Jan 29 12:03:46.591260 kubelet[3282]: E0129 12:03:46.590636 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-559dc9496c-vw9l2_calico-apiserver(ad2047c6-04db-4422-b1e5-5b03f71d15f2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-559dc9496c-vw9l2_calico-apiserver(ad2047c6-04db-4422-b1e5-5b03f71d15f2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" podUID="ad2047c6-04db-4422-b1e5-5b03f71d15f2" Jan 29 12:03:47.243046 kubelet[3282]: I0129 12:03:47.243004 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:03:47.246151 containerd[1712]: time="2025-01-29T12:03:47.245600811Z" level=info msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" Jan 29 12:03:47.246151 containerd[1712]: time="2025-01-29T12:03:47.245854715Z" level=info msg="Ensure that sandbox 2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133 in task-service has been cleanup successfully" Jan 29 12:03:47.248007 kubelet[3282]: I0129 12:03:47.246877 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:03:47.248966 containerd[1712]: time="2025-01-29T12:03:47.247749146Z" level=info msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" Jan 29 12:03:47.249696 containerd[1712]: time="2025-01-29T12:03:47.249654577Z" level=info msg="Ensure that sandbox 3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f in task-service has been cleanup successfully" Jan 29 12:03:47.251677 kubelet[3282]: I0129 12:03:47.251641 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:03:47.253202 containerd[1712]: time="2025-01-29T12:03:47.252758828Z" level=info msg="StopPodSandbox for 
\"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" Jan 29 12:03:47.253741 containerd[1712]: time="2025-01-29T12:03:47.253710744Z" level=info msg="Ensure that sandbox a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42 in task-service has been cleanup successfully" Jan 29 12:03:47.260652 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f-shm.mount: Deactivated successfully. Jan 29 12:03:47.260781 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72-shm.mount: Deactivated successfully. Jan 29 12:03:47.260872 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42-shm.mount: Deactivated successfully. Jan 29 12:03:47.260962 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e-shm.mount: Deactivated successfully. Jan 29 12:03:47.269515 kubelet[3282]: I0129 12:03:47.269475 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:47.272272 containerd[1712]: time="2025-01-29T12:03:47.272237048Z" level=info msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" Jan 29 12:03:47.272515 containerd[1712]: time="2025-01-29T12:03:47.272485752Z" level=info msg="Ensure that sandbox 461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e in task-service has been cleanup successfully" Jan 29 12:03:47.297845 kubelet[3282]: I0129 12:03:47.297764 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:03:47.301091 containerd[1712]: time="2025-01-29T12:03:47.300959320Z" level=info msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" Jan 29 12:03:47.301341 containerd[1712]: time="2025-01-29T12:03:47.301246624Z" level=info msg="Ensure that sandbox 7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768 in task-service has been cleanup successfully" Jan 29 12:03:47.305366 kubelet[3282]: I0129 12:03:47.305300 3282 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:47.306905 containerd[1712]: time="2025-01-29T12:03:47.306856516Z" level=info msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" Jan 29 12:03:47.308271 containerd[1712]: time="2025-01-29T12:03:47.307280223Z" level=info msg="Ensure that sandbox b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72 in task-service has been cleanup successfully" Jan 29 12:03:47.395182 containerd[1712]: time="2025-01-29T12:03:47.393158734Z" level=error msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" failed" error="failed to destroy network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.395354 kubelet[3282]: E0129 12:03:47.393462 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:03:47.395736 kubelet[3282]: E0129 12:03:47.393976 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f"} Jan 29 12:03:47.395933 kubelet[3282]: E0129 12:03:47.395909 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fb070daf-6fcc-4f94-819c-4f946e1c33fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.398172 kubelet[3282]: E0129 12:03:47.397358 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fb070daf-6fcc-4f94-819c-4f946e1c33fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-x2x7f" podUID="fb070daf-6fcc-4f94-819c-4f946e1c33fb" Jan 29 12:03:47.431131 containerd[1712]: time="2025-01-29T12:03:47.431060456Z" level=error msg="StopPodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" failed" error="failed to destroy network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.431612 kubelet[3282]: E0129 12:03:47.431556 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:03:47.431731 kubelet[3282]: E0129 12:03:47.431631 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42"} Jan 29 12:03:47.431731 kubelet[3282]: E0129 12:03:47.431681 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad2047c6-04db-4422-b1e5-5b03f71d15f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.431731 kubelet[3282]: E0129 12:03:47.431715 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad2047c6-04db-4422-b1e5-5b03f71d15f2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" podUID="ad2047c6-04db-4422-b1e5-5b03f71d15f2" Jan 29 12:03:47.433748 containerd[1712]: time="2025-01-29T12:03:47.433682399Z" level=error msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" failed" error="failed to destroy network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.434502 kubelet[3282]: E0129 12:03:47.434029 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:03:47.434502 kubelet[3282]: E0129 12:03:47.434097 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133"} Jan 29 12:03:47.434502 kubelet[3282]: E0129 12:03:47.434141 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71138072-0e15-4069-b62b-58fc03bf5cf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.434502 kubelet[3282]: E0129 12:03:47.434175 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71138072-0e15-4069-b62b-58fc03bf5cf2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-s2ffd" podUID="71138072-0e15-4069-b62b-58fc03bf5cf2" Jan 29 12:03:47.435449 kubelet[3282]: E0129 12:03:47.435333 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:47.435449 kubelet[3282]: E0129 12:03:47.435380 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e"} Jan 29 12:03:47.435449 kubelet[3282]: E0129 12:03:47.435433 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20cf8bd9-7e52-4094-8e72-0357f70114de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.436096 containerd[1712]: time="2025-01-29T12:03:47.435130523Z" level=error msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" failed" error="failed to destroy network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.436185 kubelet[3282]: E0129 12:03:47.435465 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20cf8bd9-7e52-4094-8e72-0357f70114de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" podUID="20cf8bd9-7e52-4094-8e72-0357f70114de" Jan 29 12:03:47.450661 containerd[1712]: time="2025-01-29T12:03:47.450570376Z" level=error msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" failed" error="failed to destroy network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.451303 kubelet[3282]: E0129 12:03:47.451096 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:03:47.451303 kubelet[3282]: E0129 12:03:47.451159 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768"} Jan 29 12:03:47.451303 kubelet[3282]: E0129 12:03:47.451203 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"aa7a0ece-20eb-47fa-a309-d56e36ab93b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.451303 kubelet[3282]: E0129 12:03:47.451242 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"aa7a0ece-20eb-47fa-a309-d56e36ab93b3\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" podUID="aa7a0ece-20eb-47fa-a309-d56e36ab93b3" Jan 29 12:03:47.457975 containerd[1712]: time="2025-01-29T12:03:47.457917097Z" level=error msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" failed" error="failed to destroy network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 12:03:47.458605 kubelet[3282]: E0129 12:03:47.458404 3282 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:47.458605 kubelet[3282]: E0129 12:03:47.458468 3282 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72"} Jan 29 12:03:47.458605 kubelet[3282]: E0129 12:03:47.458518 3282 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ad7842c6-0124-41f1-be81-515378bf6b06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 29 12:03:47.458605 kubelet[3282]: E0129 12:03:47.458553 3282 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ad7842c6-0124-41f1-be81-515378bf6b06\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-dvzbx" podUID="ad7842c6-0124-41f1-be81-515378bf6b06" Jan 29 
12:03:52.460737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1603745071.mount: Deactivated successfully. Jan 29 12:03:52.522203 containerd[1712]: time="2025-01-29T12:03:52.522138047Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.526667 containerd[1712]: time="2025-01-29T12:03:52.526574459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 12:03:52.530764 containerd[1712]: time="2025-01-29T12:03:52.530684362Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.535776 containerd[1712]: time="2025-01-29T12:03:52.535697789Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:03:52.536888 containerd[1712]: time="2025-01-29T12:03:52.536382406Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.296037103s" Jan 29 12:03:52.536888 containerd[1712]: time="2025-01-29T12:03:52.536434408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 12:03:52.556089 containerd[1712]: time="2025-01-29T12:03:52.556032502Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 12:03:52.611385 containerd[1712]: time="2025-01-29T12:03:52.611326497Z" level=info msg="CreateContainer within sandbox \"792431e3b11af0b67e0fad66a4710ef4e06ba5b5612fd808bbda925f6d7e0c47\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e\"" Jan 29 12:03:52.612209 containerd[1712]: time="2025-01-29T12:03:52.612154018Z" level=info msg="StartContainer for \"d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e\"" Jan 29 12:03:52.644236 systemd[1]: Started cri-containerd-d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e.scope - libcontainer container d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e. Jan 29 12:03:52.686387 containerd[1712]: time="2025-01-29T12:03:52.686116285Z" level=info msg="StartContainer for \"d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e\" returns successfully" Jan 29 12:03:52.890172 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 12:03:52.890355 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 12:03:53.350455 kubelet[3282]: I0129 12:03:53.350377 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-nxb6k" podStartSLOduration=1.720010745 podStartE2EDuration="21.350352746s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:03:32.906966829 +0000 UTC m=+21.883886211" lastFinishedPulling="2025-01-29 12:03:52.53730883 +0000 UTC m=+41.514228212" observedRunningTime="2025-01-29 12:03:53.349743531 +0000 UTC m=+42.326662913" watchObservedRunningTime="2025-01-29 12:03:53.350352746 +0000 UTC m=+42.327272128" Jan 29 12:03:59.130180 containerd[1712]: time="2025-01-29T12:03:59.129973996Z" level=info msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" Jan 29 12:03:59.132330 containerd[1712]: time="2025-01-29T12:03:59.131107723Z" level=info msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.204 [INFO][4660] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.205 [INFO][4660] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" iface="eth0" netns="/var/run/netns/cni-3f9202a3-be83-df78-95bb-b3cc64a19174" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.206 [INFO][4660] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" iface="eth0" netns="/var/run/netns/cni-3f9202a3-be83-df78-95bb-b3cc64a19174" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.207 [INFO][4660] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" iface="eth0" netns="/var/run/netns/cni-3f9202a3-be83-df78-95bb-b3cc64a19174" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.207 [INFO][4660] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.207 [INFO][4660] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.234 [INFO][4680] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.234 [INFO][4680] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.234 [INFO][4680] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.241 [WARNING][4680] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.241 [INFO][4680] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.242 [INFO][4680] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:59.252728 containerd[1712]: 2025-01-29 12:03:59.248 [INFO][4660] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:03:59.254219 containerd[1712]: time="2025-01-29T12:03:59.254134362Z" level=info msg="TearDown network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" successfully" Jan 29 12:03:59.254219 containerd[1712]: time="2025-01-29T12:03:59.254172263Z" level=info msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" returns successfully" Jan 29 12:03:59.256748 containerd[1712]: time="2025-01-29T12:03:59.256252411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d96776db-zgpqq,Uid:20cf8bd9-7e52-4094-8e72-0357f70114de,Namespace:calico-system,Attempt:1,}" Jan 29 12:03:59.258358 systemd[1]: run-netns-cni\x2d3f9202a3\x2dbe83\x2ddf78\x2d95bb\x2db3cc64a19174.mount: Deactivated successfully. Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.201 [INFO][4671] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.201 [INFO][4671] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" iface="eth0" netns="/var/run/netns/cni-8e66c48c-69a9-f897-914c-52f48e25bb5b" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.202 [INFO][4671] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" iface="eth0" netns="/var/run/netns/cni-8e66c48c-69a9-f897-914c-52f48e25bb5b" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.202 [INFO][4671] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" iface="eth0" netns="/var/run/netns/cni-8e66c48c-69a9-f897-914c-52f48e25bb5b" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.202 [INFO][4671] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.202 [INFO][4671] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.235 [INFO][4679] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.236 [INFO][4679] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.243 [INFO][4679] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.252 [WARNING][4679] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.252 [INFO][4679] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.259 [INFO][4679] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:03:59.265016 containerd[1712]: 2025-01-29 12:03:59.261 [INFO][4671] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:03:59.265439 containerd[1712]: time="2025-01-29T12:03:59.265406622Z" level=info msg="TearDown network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" successfully" Jan 29 12:03:59.265497 containerd[1712]: time="2025-01-29T12:03:59.265433923Z" level=info msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" returns successfully" Jan 29 12:03:59.266629 containerd[1712]: time="2025-01-29T12:03:59.266209041Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dvzbx,Uid:ad7842c6-0124-41f1-be81-515378bf6b06,Namespace:kube-system,Attempt:1,}" Jan 29 12:03:59.268462 systemd[1]: run-netns-cni\x2d8e66c48c\x2d69a9\x2df897\x2d914c\x2d52f48e25bb5b.mount: Deactivated successfully. 
Jan 29 12:03:59.486766 systemd-networkd[1579]: cali29ed562aa6a: Link UP Jan 29 12:03:59.489315 systemd-networkd[1579]: cali29ed562aa6a: Gained carrier Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.355 [INFO][4693] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.377 [INFO][4693] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0 calico-kube-controllers-54d96776db- calico-system 20cf8bd9-7e52-4094-8e72-0357f70114de 748 0 2025-01-29 12:03:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54d96776db projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 calico-kube-controllers-54d96776db-zgpqq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali29ed562aa6a [] []}} ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.377 [INFO][4693] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.421 [INFO][4713] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" HandleID="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.433 [INFO][4713] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" HandleID="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00027e330), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-76e05e3785", "pod":"calico-kube-controllers-54d96776db-zgpqq", "timestamp":"2025-01-29 12:03:59.421663128 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.433 [INFO][4713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.434 [INFO][4713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.434 [INFO][4713] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.437 [INFO][4713] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.441 [INFO][4713] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.445 [INFO][4713] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.447 [INFO][4713] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.449 [INFO][4713] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.449 [INFO][4713] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.453 [INFO][4713] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.459 [INFO][4713] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4713] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.129/26] block=192.168.106.128/26 handle="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4713] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.129/26] handle="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:59.518662 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4713] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.129/26] IPv6=[] ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" HandleID="k8s-pod-network.b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.521845 containerd[1712]: 2025-01-29 12:03:59.470 [INFO][4693] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0", GenerateName:"calico-kube-controllers-54d96776db-", Namespace:"calico-system", SelfLink:"", UID:"20cf8bd9-7e52-4094-8e72-0357f70114de", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d96776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"calico-kube-controllers-54d96776db-zgpqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29ed562aa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:59.521845 containerd[1712]: 2025-01-29 12:03:59.470 [INFO][4693] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.129/32] ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.521845 containerd[1712]: 2025-01-29 12:03:59.470 [INFO][4693] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali29ed562aa6a ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.521845 containerd[1712]: 2025-01-29 12:03:59.487 [INFO][4693] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.521845 
containerd[1712]: 2025-01-29 12:03:59.489 [INFO][4693] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0", GenerateName:"calico-kube-controllers-54d96776db-", Namespace:"calico-system", SelfLink:"", UID:"20cf8bd9-7e52-4094-8e72-0357f70114de", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d96776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b", Pod:"calico-kube-controllers-54d96776db-zgpqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29ed562aa6a", MAC:"46:69:44:15:36:79", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:59.521845 containerd[1712]: 2025-01-29 12:03:59.516 [INFO][4693] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b" Namespace="calico-system" Pod="calico-kube-controllers-54d96776db-zgpqq" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:03:59.523371 systemd-networkd[1579]: cali598963082d4: Link UP Jan 29 12:03:59.523710 systemd-networkd[1579]: cali598963082d4: Gained carrier Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.392 [INFO][4703] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.406 [INFO][4703] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0 coredns-7db6d8ff4d- kube-system ad7842c6-0124-41f1-be81-515378bf6b06 747 0 2025-01-29 12:03:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 coredns-7db6d8ff4d-dvzbx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali598963082d4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" 
WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.406 [INFO][4703] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.453 [INFO][4722] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" HandleID="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.466 [INFO][4722] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" HandleID="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba5e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-76e05e3785", "pod":"coredns-7db6d8ff4d-dvzbx", "timestamp":"2025-01-29 12:03:59.453739669 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.466 [INFO][4722] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4722] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.467 [INFO][4722] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.469 [INFO][4722] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.474 [INFO][4722] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.478 [INFO][4722] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.481 [INFO][4722] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.484 [INFO][4722] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.484 [INFO][4722] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.486 [INFO][4722] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.496 [INFO][4722] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.513 [INFO][4722] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.130/26] block=192.168.106.128/26 handle="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.514 [INFO][4722] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.130/26] handle="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.514 [INFO][4722] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:03:59.547634 containerd[1712]: 2025-01-29 12:03:59.514 [INFO][4722] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.130/26] IPv6=[] ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" HandleID="k8s-pod-network.8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.518 [INFO][4703] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad7842c6-0124-41f1-be81-515378bf6b06", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"coredns-7db6d8ff4d-dvzbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali598963082d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.518 [INFO][4703] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.130/32] ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.518 [INFO][4703] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali598963082d4 ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.523 [INFO][4703] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" 
WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.525 [INFO][4703] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad7842c6-0124-41f1-be81-515378bf6b06", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc", Pod:"coredns-7db6d8ff4d-dvzbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali598963082d4", MAC:"5e:39:5c:e2:82:36", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:03:59.548610 containerd[1712]: 2025-01-29 12:03:59.542 [INFO][4703] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc" Namespace="kube-system" Pod="coredns-7db6d8ff4d-dvzbx" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:03:59.565943 containerd[1712]: time="2025-01-29T12:03:59.564056415Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:59.565943 containerd[1712]: time="2025-01-29T12:03:59.564847333Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:59.565943 containerd[1712]: time="2025-01-29T12:03:59.565088439Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.565943 containerd[1712]: time="2025-01-29T12:03:59.565342944Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.596816 containerd[1712]: time="2025-01-29T12:03:59.595337037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:03:59.596816 containerd[1712]: time="2025-01-29T12:03:59.595403538Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:03:59.596816 containerd[1712]: time="2025-01-29T12:03:59.595424839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.596816 containerd[1712]: time="2025-01-29T12:03:59.595518941Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:03:59.599219 systemd[1]: Started cri-containerd-b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b.scope - libcontainer container b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b. Jan 29 12:03:59.630229 systemd[1]: Started cri-containerd-8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc.scope - libcontainer container 8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc. Jan 29 12:03:59.689740 containerd[1712]: time="2025-01-29T12:03:59.689666714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54d96776db-zgpqq,Uid:20cf8bd9-7e52-4094-8e72-0357f70114de,Namespace:calico-system,Attempt:1,} returns sandbox id \"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b\"" Jan 29 12:03:59.694339 containerd[1712]: time="2025-01-29T12:03:59.693975013Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 12:03:59.697443 containerd[1712]: time="2025-01-29T12:03:59.697383092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-dvzbx,Uid:ad7842c6-0124-41f1-be81-515378bf6b06,Namespace:kube-system,Attempt:1,} returns sandbox id \"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc\"" Jan 29 12:03:59.704495 containerd[1712]: time="2025-01-29T12:03:59.704448255Z" level=info msg="CreateContainer within sandbox \"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:03:59.752733 containerd[1712]: time="2025-01-29T12:03:59.752598766Z" level=info msg="CreateContainer within sandbox \"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2ac111aac9c506d7a666ef61f86dc95e9ddc5aa40159c3264eb3c8be1807fb18\"" Jan 29 12:03:59.757029 containerd[1712]: time="2025-01-29T12:03:59.756106247Z" level=info msg="StartContainer for \"2ac111aac9c506d7a666ef61f86dc95e9ddc5aa40159c3264eb3c8be1807fb18\"" Jan 29 12:03:59.822049 systemd[1]: Started cri-containerd-2ac111aac9c506d7a666ef61f86dc95e9ddc5aa40159c3264eb3c8be1807fb18.scope - libcontainer container 2ac111aac9c506d7a666ef61f86dc95e9ddc5aa40159c3264eb3c8be1807fb18. 
Jan 29 12:03:59.870030 containerd[1712]: time="2025-01-29T12:03:59.869903374Z" level=info msg="StartContainer for \"2ac111aac9c506d7a666ef61f86dc95e9ddc5aa40159c3264eb3c8be1807fb18\" returns successfully" Jan 29 12:04:00.128874 containerd[1712]: time="2025-01-29T12:04:00.128475841Z" level=info msg="StopPodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" Jan 29 12:04:00.129242 containerd[1712]: time="2025-01-29T12:04:00.129195258Z" level=info msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" Jan 29 12:04:00.131124 containerd[1712]: time="2025-01-29T12:04:00.131090102Z" level=info msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" iface="eth0" netns="/var/run/netns/cni-542876a1-fc50-ba75-c7c5-ad1591f6de3c" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" iface="eth0" netns="/var/run/netns/cni-542876a1-fc50-ba75-c7c5-ad1591f6de3c" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" iface="eth0" netns="/var/run/netns/cni-542876a1-fc50-ba75-c7c5-ad1591f6de3c" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.219 [INFO][4932] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.285 [INFO][4950] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.288 [INFO][4950] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.288 [INFO][4950] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.295 [WARNING][4950] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.297 [INFO][4950] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.298 [INFO][4950] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:00.304801 containerd[1712]: 2025-01-29 12:04:00.300 [INFO][4932] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:00.306948 containerd[1712]: time="2025-01-29T12:04:00.304905313Z" level=info msg="TearDown network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" successfully" Jan 29 12:04:00.306948 containerd[1712]: time="2025-01-29T12:04:00.304936214Z" level=info msg="StopPodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" returns successfully" Jan 29 12:04:00.312907 containerd[1712]: time="2025-01-29T12:04:00.310612045Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-vw9l2,Uid:ad2047c6-04db-4422-b1e5-5b03f71d15f2,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:04:00.315526 systemd[1]: run-netns-cni\x2d542876a1\x2dfc50\x2dba75\x2dc7c5\x2dad1591f6de3c.mount: Deactivated successfully. Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.225 [INFO][4924] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.225 [INFO][4924] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" iface="eth0" netns="/var/run/netns/cni-152e3be4-bdcc-f5e8-3987-74b074d258f9" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.225 [INFO][4924] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" iface="eth0" netns="/var/run/netns/cni-152e3be4-bdcc-f5e8-3987-74b074d258f9" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.226 [INFO][4924] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" iface="eth0" netns="/var/run/netns/cni-152e3be4-bdcc-f5e8-3987-74b074d258f9" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.227 [INFO][4924] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.227 [INFO][4924] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.287 [INFO][4951] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.287 [INFO][4951] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.298 [INFO][4951] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.316 [WARNING][4951] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.316 [INFO][4951] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.319 [INFO][4951] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:00.325351 containerd[1712]: 2025-01-29 12:04:00.323 [INFO][4924] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:00.325921 containerd[1712]: time="2025-01-29T12:04:00.325896198Z" level=info msg="TearDown network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" successfully" Jan 29 12:04:00.325963 containerd[1712]: time="2025-01-29T12:04:00.325927498Z" level=info msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" returns successfully" Jan 29 12:04:00.330066 containerd[1712]: time="2025-01-29T12:04:00.328195751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-djhw8,Uid:aa7a0ece-20eb-47fa-a309-d56e36ab93b3,Namespace:calico-apiserver,Attempt:1,}" Jan 29 12:04:00.331405 systemd[1]: run-netns-cni\x2d152e3be4\x2dbdcc\x2df5e8\x2d3987\x2d74b074d258f9.mount: Deactivated successfully. Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.252 [INFO][4931] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.253 [INFO][4931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" iface="eth0" netns="/var/run/netns/cni-a41e90d5-016e-297a-c333-bec9be4e9708" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.255 [INFO][4931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" iface="eth0" netns="/var/run/netns/cni-a41e90d5-016e-297a-c333-bec9be4e9708" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.256 [INFO][4931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" iface="eth0" netns="/var/run/netns/cni-a41e90d5-016e-297a-c333-bec9be4e9708" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.256 [INFO][4931] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.256 [INFO][4931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.314 [INFO][4963] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.315 [INFO][4963] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.320 [INFO][4963] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.333 [WARNING][4963] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.334 [INFO][4963] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.336 [INFO][4963] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:00.338479 containerd[1712]: 2025-01-29 12:04:00.337 [INFO][4931] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:00.343306 containerd[1712]: time="2025-01-29T12:04:00.343207297Z" level=info msg="TearDown network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" successfully" Jan 29 12:04:00.343306 containerd[1712]: time="2025-01-29T12:04:00.343241998Z" level=info msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" returns successfully" Jan 29 12:04:00.344850 containerd[1712]: time="2025-01-29T12:04:00.344816134Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2ffd,Uid:71138072-0e15-4069-b62b-58fc03bf5cf2,Namespace:calico-system,Attempt:1,}" Jan 29 12:04:00.345133 systemd[1]: run-netns-cni\x2da41e90d5\x2d016e\x2d297a\x2dc333\x2dbec9be4e9708.mount: Deactivated successfully. Jan 29 12:04:00.376449 kubelet[3282]: I0129 12:04:00.376372 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-dvzbx" podStartSLOduration=35.376347362 podStartE2EDuration="35.376347362s" podCreationTimestamp="2025-01-29 12:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:00.375155834 +0000 UTC m=+49.352075116" watchObservedRunningTime="2025-01-29 12:04:00.376347362 +0000 UTC m=+49.353266644" Jan 29 12:04:00.631289 systemd-networkd[1579]: calib070048f9fb: Link UP Jan 29 12:04:00.633166 systemd-networkd[1579]: calib070048f9fb: Gained carrier Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.476 [INFO][4977] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.497 [INFO][4977] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0 csi-node-driver- calico-system 71138072-0e15-4069-b62b-58fc03bf5cf2 768 0 2025-01-29 12:03:32 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 csi-node-driver-s2ffd eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib070048f9fb [] []}} ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.497 [INFO][4977] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.568 [INFO][5010] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" HandleID="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.589 [INFO][5010] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" HandleID="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002937d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-76e05e3785", "pod":"csi-node-driver-s2ffd", "timestamp":"2025-01-29 12:04:00.568408095 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.589 [INFO][5010] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.589 [INFO][5010] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.589 [INFO][5010] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.591 [INFO][5010] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.596 [INFO][5010] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.603 [INFO][5010] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.605 [INFO][5010] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.608 [INFO][5010] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.608 [INFO][5010] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.611 [INFO][5010] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.617 [INFO][5010] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.623 [INFO][5010] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.131/26] block=192.168.106.128/26 handle="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.623 [INFO][5010] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.131/26] handle="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.663217 
containerd[1712]: 2025-01-29 12:04:00.623 [INFO][5010] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:00.663217 containerd[1712]: 2025-01-29 12:04:00.623 [INFO][5010] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.131/26] IPv6=[] ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" HandleID="k8s-pod-network.559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.627 [INFO][4977] cni-plugin/k8s.go 386: Populated endpoint ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71138072-0e15-4069-b62b-58fc03bf5cf2", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"csi-node-driver-s2ffd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib070048f9fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.628 [INFO][4977] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.131/32] ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.628 [INFO][4977] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib070048f9fb ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.630 [INFO][4977] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.630 [INFO][4977] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71138072-0e15-4069-b62b-58fc03bf5cf2", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb", Pod:"csi-node-driver-s2ffd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib070048f9fb", MAC:"62:02:58:9a:1d:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.664359 containerd[1712]: 2025-01-29 12:04:00.661 [INFO][4977] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb" Namespace="calico-system" Pod="csi-node-driver-s2ffd" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:00.697084 systemd-networkd[1579]: cali598963082d4: Gained IPv6LL Jan 29 12:04:00.744771 systemd-networkd[1579]: cali8044648e1b0: Link UP Jan 29 12:04:00.748132 systemd-networkd[1579]: cali8044648e1b0: Gained carrier Jan 29 12:04:00.751133 containerd[1712]: time="2025-01-29T12:04:00.750866106Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:00.755033 containerd[1712]: time="2025-01-29T12:04:00.754248984Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:00.755033 containerd[1712]: time="2025-01-29T12:04:00.754277684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.762599 containerd[1712]: time="2025-01-29T12:04:00.762386771Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.501 [INFO][4985] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.517 [INFO][4985] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0 calico-apiserver-559dc9496c- calico-apiserver aa7a0ece-20eb-47fa-a309-d56e36ab93b3 767 0 2025-01-29 12:03:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559dc9496c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 calico-apiserver-559dc9496c-djhw8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali8044648e1b0 [] []}} ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.518 [INFO][4985] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.574 [INFO][5014] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" HandleID="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.590 [INFO][5014] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" HandleID="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-76e05e3785", "pod":"calico-apiserver-559dc9496c-djhw8", "timestamp":"2025-01-29 12:04:00.57470614 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.590 [INFO][5014] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.623 [INFO][5014] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.624 [INFO][5014] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.638 [INFO][5014] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.658 [INFO][5014] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.684 [INFO][5014] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.687 [INFO][5014] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.693 [INFO][5014] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.694 [INFO][5014] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.695 [INFO][5014] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2 Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.709 [INFO][5014] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.723 [INFO][5014] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.132/26] block=192.168.106.128/26 handle="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.723 [INFO][5014] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.132/26] handle="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.724 [INFO][5014] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
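Everything scheduled onto this node in the excerpt is allocated out of a single affine IPAM block, 192.168.106.128/26, which the trace above loads and writes under the host-wide lock. A small standard-library check (a sketch, not Calico code) that the addresses handed out so far really fall inside that block:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Affine block claimed by ci-4081.3.0-a-76e05e3785 in the IPAM trace above.
        block := netip.MustParsePrefix("192.168.106.128/26")
        assigned := map[string]string{
            "coredns-7db6d8ff4d-dvzbx":          "192.168.106.130",
            "csi-node-driver-s2ffd":             "192.168.106.131",
            "calico-apiserver-559dc9496c-djhw8": "192.168.106.132",
        }
        for pod, a := range assigned {
            ip := netip.MustParseAddr(a)
            fmt.Printf("%-36s %s in %s: %v\n", pod, ip, block, block.Contains(ip))
        }
    }
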
Jan 29 12:04:00.795682 containerd[1712]: 2025-01-29 12:04:00.724 [INFO][5014] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.132/26] IPv6=[] ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" HandleID="k8s-pod-network.248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.731 [INFO][4985] cni-plugin/k8s.go 386: Populated endpoint ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa7a0ece-20eb-47fa-a309-d56e36ab93b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"calico-apiserver-559dc9496c-djhw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8044648e1b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.732 [INFO][4985] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.132/32] ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.732 [INFO][4985] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8044648e1b0 ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.745 [INFO][4985] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.757 [INFO][4985] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa7a0ece-20eb-47fa-a309-d56e36ab93b3", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2", Pod:"calico-apiserver-559dc9496c-djhw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8044648e1b0", MAC:"56:3f:84:24:ee:fa", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.800405 containerd[1712]: 2025-01-29 12:04:00.791 [INFO][4985] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-djhw8" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:00.799160 systemd[1]: Started cri-containerd-559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb.scope - libcontainer container 559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb. 
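Each container in this log appears under two names for the same 64-character ID: containerd reports it as a sandbox or container ID, and systemd runs it as a transient scope called cri-containerd-<id>.scope. A small sketch (assuming only the naming pattern visible in these lines) that recovers the ID from such a journal line:

    package main

    import (
        "fmt"
        "regexp"
    )

    // scopeID matches the transient scope names printed in this log,
    // e.g. "Started cri-containerd-<64 hex chars>.scope".
    var scopeID = regexp.MustCompile(`cri-containerd-([0-9a-f]{64})\.scope`)

    func main() {
        line := `systemd[1]: Started cri-containerd-559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb.scope - libcontainer container 559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb.`
        if m := scopeID.FindStringSubmatch(line); m != nil {
            fmt.Println("container ID:", m[1])
        }
    }
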
Jan 29 12:04:00.846761 systemd-networkd[1579]: cali43f69006e59: Link UP Jan 29 12:04:00.850677 systemd-networkd[1579]: cali43f69006e59: Gained carrier Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.496 [INFO][4995] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.513 [INFO][4995] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0 calico-apiserver-559dc9496c- calico-apiserver ad2047c6-04db-4422-b1e5-5b03f71d15f2 766 0 2025-01-29 12:03:32 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:559dc9496c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 calico-apiserver-559dc9496c-vw9l2 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali43f69006e59 [] []}} ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.513 [INFO][4995] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.576 [INFO][5015] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" HandleID="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.592 [INFO][5015] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" HandleID="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b610), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-76e05e3785", "pod":"calico-apiserver-559dc9496c-vw9l2", "timestamp":"2025-01-29 12:04:00.576262576 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.592 [INFO][5015] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.724 [INFO][5015] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.725 [INFO][5015] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.729 [INFO][5015] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.744 [INFO][5015] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.788 [INFO][5015] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.794 [INFO][5015] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.804 [INFO][5015] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.804 [INFO][5015] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.816 [INFO][5015] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.825 [INFO][5015] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.838 [INFO][5015] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.133/26] block=192.168.106.128/26 handle="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.838 [INFO][5015] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.133/26] handle="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.838 [INFO][5015] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
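The timestamps above show the three concurrent CNI ADDs being strictly serialized by the host-wide IPAM lock: [5010] (csi-node-driver) holds it from 12:04:00.589 to .623, [5014] (apiserver-djhw8) from .623 to .724, and [5015] (apiserver-vw9l2) acquires it at .724. A toy Go sketch of that pattern (purely illustrative, nothing like Calico's real implementation), where a mutex plays the role of the host-wide lock:

    package main

    import (
        "fmt"
        "sync"
    )

    // allocator hands out sequential offsets under a mutex, mirroring how the
    // host-wide IPAM lock forces concurrent CNI ADDs on one node to take turns.
    type allocator struct {
        mu   sync.Mutex
        next int
    }

    func (a *allocator) assign(pod string) {
        a.mu.Lock() // "Acquired host-wide IPAM lock."
        defer a.mu.Unlock()
        fmt.Printf("%s -> offset %d\n", pod, a.next)
        a.next++ // released on return: "Released host-wide IPAM lock."
    }

    func main() {
        a := &allocator{}
        var wg sync.WaitGroup
        // Order here is whatever the scheduler picks; in the log the queue order
        // happened to be csi-node-driver, then djhw8, then vw9l2.
        for _, pod := range []string{"csi-node-driver-s2ffd", "calico-apiserver-559dc9496c-djhw8", "calico-apiserver-559dc9496c-vw9l2"} {
            wg.Add(1)
            go func(p string) { defer wg.Done(); a.assign(p) }(pod)
        }
        wg.Wait()
    }
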
Jan 29 12:04:00.869717 containerd[1712]: 2025-01-29 12:04:00.839 [INFO][5015] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.133/26] IPv6=[] ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" HandleID="k8s-pod-network.d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.842 [INFO][4995] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2047c6-04db-4422-b1e5-5b03f71d15f2", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"calico-apiserver-559dc9496c-vw9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f69006e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.843 [INFO][4995] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.133/32] ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.843 [INFO][4995] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali43f69006e59 ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.845 [INFO][4995] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.845 [INFO][4995] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2047c6-04db-4422-b1e5-5b03f71d15f2", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe", Pod:"calico-apiserver-559dc9496c-vw9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f69006e59", MAC:"72:cb:1b:e2:42:0e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:00.871801 containerd[1712]: 2025-01-29 12:04:00.864 [INFO][4995] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe" Namespace="calico-apiserver" Pod="calico-apiserver-559dc9496c-vw9l2" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:00.879082 containerd[1712]: time="2025-01-29T12:04:00.878151643Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:00.879082 containerd[1712]: time="2025-01-29T12:04:00.878228745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:00.879082 containerd[1712]: time="2025-01-29T12:04:00.878252046Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.879082 containerd[1712]: time="2025-01-29T12:04:00.878393349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.956072 systemd-networkd[1579]: cali29ed562aa6a: Gained IPv6LL Jan 29 12:04:00.958073 systemd[1]: Started cri-containerd-248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2.scope - libcontainer container 248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2. Jan 29 12:04:00.973268 containerd[1712]: time="2025-01-29T12:04:00.973171736Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:00.973647 containerd[1712]: time="2025-01-29T12:04:00.973438442Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:00.973647 containerd[1712]: time="2025-01-29T12:04:00.973508744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.974259 containerd[1712]: time="2025-01-29T12:04:00.974128158Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:00.979589 containerd[1712]: time="2025-01-29T12:04:00.979404880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-s2ffd,Uid:71138072-0e15-4069-b62b-58fc03bf5cf2,Namespace:calico-system,Attempt:1,} returns sandbox id \"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb\"" Jan 29 12:04:01.031232 systemd[1]: Started cri-containerd-d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe.scope - libcontainer container d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe. Jan 29 12:04:01.177495 containerd[1712]: time="2025-01-29T12:04:01.177388049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-djhw8,Uid:aa7a0ece-20eb-47fa-a309-d56e36ab93b3,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2\"" Jan 29 12:04:01.199223 containerd[1712]: time="2025-01-29T12:04:01.199173952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-559dc9496c-vw9l2,Uid:ad2047c6-04db-4422-b1e5-5b03f71d15f2,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe\"" Jan 29 12:04:01.372805 kubelet[3282]: I0129 12:04:01.371889 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:01.822008 kernel: bpftool[5224]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 12:04:02.040199 systemd-networkd[1579]: calib070048f9fb: Gained IPv6LL Jan 29 12:04:02.128531 containerd[1712]: time="2025-01-29T12:04:02.128376014Z" level=info msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" Jan 29 12:04:02.361342 systemd-networkd[1579]: cali43f69006e59: Gained IPv6LL Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.309 [INFO][5251] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.311 [INFO][5251] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" iface="eth0" netns="/var/run/netns/cni-d05aabd5-837e-0edb-56dc-96e7c4ed3457" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.311 [INFO][5251] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" iface="eth0" netns="/var/run/netns/cni-d05aabd5-837e-0edb-56dc-96e7c4ed3457" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.312 [INFO][5251] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" iface="eth0" netns="/var/run/netns/cni-d05aabd5-837e-0edb-56dc-96e7c4ed3457" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.312 [INFO][5251] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.313 [INFO][5251] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.373 [INFO][5278] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.374 [INFO][5278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.374 [INFO][5278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.386 [WARNING][5278] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.386 [INFO][5278] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.389 [INFO][5278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:02.396344 containerd[1712]: 2025-01-29 12:04:02.391 [INFO][5251] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:02.396344 containerd[1712]: time="2025-01-29T12:04:02.395274592Z" level=info msg="TearDown network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" successfully" Jan 29 12:04:02.396344 containerd[1712]: time="2025-01-29T12:04:02.395312093Z" level=info msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" returns successfully" Jan 29 12:04:02.403649 containerd[1712]: time="2025-01-29T12:04:02.400107236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x2x7f,Uid:fb070daf-6fcc-4f94-819c-4f946e1c33fb,Namespace:kube-system,Attempt:1,}" Jan 29 12:04:02.400692 systemd[1]: run-netns-cni\x2dd05aabd5\x2d837e\x2d0edb\x2d56dc\x2d96e7c4ed3457.mount: Deactivated successfully. 
Jan 29 12:04:02.489445 systemd-networkd[1579]: cali8044648e1b0: Gained IPv6LL Jan 29 12:04:02.736313 systemd-networkd[1579]: calic71de8377ab: Link UP Jan 29 12:04:02.736513 systemd-networkd[1579]: calic71de8377ab: Gained carrier Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.565 [INFO][5302] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0 coredns-7db6d8ff4d- kube-system fb070daf-6fcc-4f94-819c-4f946e1c33fb 800 0 2025-01-29 12:03:25 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-76e05e3785 coredns-7db6d8ff4d-x2x7f eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic71de8377ab [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.565 [INFO][5302] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.643 [INFO][5318] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" HandleID="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.664 [INFO][5318] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" HandleID="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050a00), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-76e05e3785", "pod":"coredns-7db6d8ff4d-x2x7f", "timestamp":"2025-01-29 12:04:02.642965995 +0000 UTC"}, Hostname:"ci-4081.3.0-a-76e05e3785", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.664 [INFO][5318] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.665 [INFO][5318] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.665 [INFO][5318] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-76e05e3785' Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.669 [INFO][5318] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.678 [INFO][5318] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.687 [INFO][5318] ipam/ipam.go 489: Trying affinity for 192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.689 [INFO][5318] ipam/ipam.go 155: Attempting to load block cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.694 [INFO][5318] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.106.128/26 host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.695 [INFO][5318] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.106.128/26 handle="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.699 [INFO][5318] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8 Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.711 [INFO][5318] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.106.128/26 handle="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.724 [INFO][5318] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.106.134/26] block=192.168.106.128/26 handle="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.725 [INFO][5318] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.106.134/26] handle="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" host="ci-4081.3.0-a-76e05e3785" Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.725 [INFO][5318] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 12:04:02.767675 containerd[1712]: 2025-01-29 12:04:02.725 [INFO][5318] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.106.134/26] IPv6=[] ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" HandleID="k8s-pod-network.77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.731 [INFO][5302] cni-plugin/k8s.go 386: Populated endpoint ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb070daf-6fcc-4f94-819c-4f946e1c33fb", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"", Pod:"coredns-7db6d8ff4d-x2x7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic71de8377ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.732 [INFO][5302] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.106.134/32] ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.732 [INFO][5302] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic71de8377ab ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.735 [INFO][5302] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" 
WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.736 [INFO][5302] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb070daf-6fcc-4f94-819c-4f946e1c33fb", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8", Pod:"coredns-7db6d8ff4d-x2x7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic71de8377ab", MAC:"ca:82:1f:07:95:b5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:02.768705 containerd[1712]: 2025-01-29 12:04:02.764 [INFO][5302] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8" Namespace="kube-system" Pod="coredns-7db6d8ff4d-x2x7f" WorkloadEndpoint="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:02.838707 systemd-networkd[1579]: vxlan.calico: Link UP Jan 29 12:04:02.838718 systemd-networkd[1579]: vxlan.calico: Gained carrier Jan 29 12:04:02.869615 containerd[1712]: time="2025-01-29T12:04:02.866037762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 12:04:02.870020 containerd[1712]: time="2025-01-29T12:04:02.869919178Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 12:04:02.870020 containerd[1712]: time="2025-01-29T12:04:02.869946479Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.872137 containerd[1712]: time="2025-01-29T12:04:02.871212517Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 12:04:02.921237 systemd[1]: Started cri-containerd-77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8.scope - libcontainer container 77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8. Jan 29 12:04:02.945035 containerd[1712]: time="2025-01-29T12:04:02.944181298Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:02.948894 containerd[1712]: time="2025-01-29T12:04:02.948757635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 12:04:02.952640 containerd[1712]: time="2025-01-29T12:04:02.952438345Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:02.970114 containerd[1712]: time="2025-01-29T12:04:02.968649029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:02.972407 containerd[1712]: time="2025-01-29T12:04:02.972239536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.278203922s" Jan 29 12:04:02.972407 containerd[1712]: time="2025-01-29T12:04:02.972295438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 12:04:02.975260 containerd[1712]: time="2025-01-29T12:04:02.975105222Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 12:04:02.987235 containerd[1712]: time="2025-01-29T12:04:02.987192283Z" level=info msg="CreateContainer within sandbox \"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 12:04:03.031208 containerd[1712]: time="2025-01-29T12:04:03.031078395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-x2x7f,Uid:fb070daf-6fcc-4f94-819c-4f946e1c33fb,Namespace:kube-system,Attempt:1,} returns sandbox id \"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8\"" Jan 29 12:04:03.041207 containerd[1712]: time="2025-01-29T12:04:03.041160996Z" level=info msg="CreateContainer within sandbox \"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 12:04:03.050729 containerd[1712]: time="2025-01-29T12:04:03.050674281Z" level=info msg="CreateContainer within sandbox \"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238\"" Jan 29 
12:04:03.052212 containerd[1712]: time="2025-01-29T12:04:03.052176726Z" level=info msg="StartContainer for \"ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238\"" Jan 29 12:04:03.100404 systemd[1]: Started cri-containerd-ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238.scope - libcontainer container ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238. Jan 29 12:04:03.136180 containerd[1712]: time="2025-01-29T12:04:03.135292810Z" level=info msg="CreateContainer within sandbox \"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6b86fc0e1e38973a9a2c392bba8db0609468e16f7cc76dfe615323a7274c018d\"" Jan 29 12:04:03.138192 containerd[1712]: time="2025-01-29T12:04:03.136399343Z" level=info msg="StartContainer for \"6b86fc0e1e38973a9a2c392bba8db0609468e16f7cc76dfe615323a7274c018d\"" Jan 29 12:04:03.200297 systemd[1]: Started cri-containerd-6b86fc0e1e38973a9a2c392bba8db0609468e16f7cc76dfe615323a7274c018d.scope - libcontainer container 6b86fc0e1e38973a9a2c392bba8db0609468e16f7cc76dfe615323a7274c018d. Jan 29 12:04:03.236854 containerd[1712]: time="2025-01-29T12:04:03.236695641Z" level=info msg="StartContainer for \"ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238\" returns successfully" Jan 29 12:04:03.265768 containerd[1712]: time="2025-01-29T12:04:03.265553403Z" level=info msg="StartContainer for \"6b86fc0e1e38973a9a2c392bba8db0609468e16f7cc76dfe615323a7274c018d\" returns successfully" Jan 29 12:04:03.399026 kubelet[3282]: I0129 12:04:03.397724 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-x2x7f" podStartSLOduration=38.397701453 podStartE2EDuration="38.397701453s" podCreationTimestamp="2025-01-29 12:03:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 12:04:03.395038673 +0000 UTC m=+52.371958055" watchObservedRunningTime="2025-01-29 12:04:03.397701453 +0000 UTC m=+52.374620735" Jan 29 12:04:03.473813 kubelet[3282]: I0129 12:04:03.473214 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-54d96776db-zgpqq" podStartSLOduration=28.19212251 podStartE2EDuration="31.473187809s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:03:59.692758285 +0000 UTC m=+48.669677567" lastFinishedPulling="2025-01-29 12:04:02.973823584 +0000 UTC m=+51.950742866" observedRunningTime="2025-01-29 12:04:03.454764158 +0000 UTC m=+52.431683440" watchObservedRunningTime="2025-01-29 12:04:03.473187809 +0000 UTC m=+52.450107191" Jan 29 12:04:03.960163 systemd-networkd[1579]: calic71de8377ab: Gained IPv6LL Jan 29 12:04:04.152187 systemd-networkd[1579]: vxlan.calico: Gained IPv6LL Jan 29 12:04:04.435722 containerd[1712]: time="2025-01-29T12:04:04.435665476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:04.443003 containerd[1712]: time="2025-01-29T12:04:04.440265114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 12:04:04.447042 containerd[1712]: time="2025-01-29T12:04:04.447009115Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jan 29 12:04:04.453436 containerd[1712]: time="2025-01-29T12:04:04.453399706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:04.455001 containerd[1712]: time="2025-01-29T12:04:04.454919552Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.479769328s" Jan 29 12:04:04.455001 containerd[1712]: time="2025-01-29T12:04:04.454960653Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 12:04:04.457253 containerd[1712]: time="2025-01-29T12:04:04.457218920Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:04:04.458675 containerd[1712]: time="2025-01-29T12:04:04.458646163Z" level=info msg="CreateContainer within sandbox \"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 12:04:04.511230 containerd[1712]: time="2025-01-29T12:04:04.511185833Z" level=info msg="CreateContainer within sandbox \"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898\"" Jan 29 12:04:04.511778 containerd[1712]: time="2025-01-29T12:04:04.511748450Z" level=info msg="StartContainer for \"6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898\"" Jan 29 12:04:04.549038 systemd[1]: run-containerd-runc-k8s.io-6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898-runc.9w9T6l.mount: Deactivated successfully. Jan 29 12:04:04.555164 systemd[1]: Started cri-containerd-6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898.scope - libcontainer container 6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898. 
Jan 29 12:04:04.593879 containerd[1712]: time="2025-01-29T12:04:04.593832304Z" level=info msg="StartContainer for \"6a7f8ac2cd6b299a1b6b34b164a38b2a62122b83cf9c61caafcb568563063898\" returns successfully" Jan 29 12:04:07.059454 containerd[1712]: time="2025-01-29T12:04:07.059394596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.062403 containerd[1712]: time="2025-01-29T12:04:07.062327184Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 12:04:07.066752 containerd[1712]: time="2025-01-29T12:04:07.066687414Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.072890 containerd[1712]: time="2025-01-29T12:04:07.072856898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.074057 containerd[1712]: time="2025-01-29T12:04:07.073560319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.616201195s" Jan 29 12:04:07.074057 containerd[1712]: time="2025-01-29T12:04:07.073603321Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:04:07.075707 containerd[1712]: time="2025-01-29T12:04:07.075642982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 12:04:07.077652 containerd[1712]: time="2025-01-29T12:04:07.077184228Z" level=info msg="CreateContainer within sandbox \"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:04:07.124920 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795533252.mount: Deactivated successfully. Jan 29 12:04:07.126686 containerd[1712]: time="2025-01-29T12:04:07.126644306Z" level=info msg="CreateContainer within sandbox \"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"b285f75c76f9eb0e99652f6dc9f8cc83e3f82c9642dbda9731962a31caf61473\"" Jan 29 12:04:07.128300 containerd[1712]: time="2025-01-29T12:04:07.128270254Z" level=info msg="StartContainer for \"b285f75c76f9eb0e99652f6dc9f8cc83e3f82c9642dbda9731962a31caf61473\"" Jan 29 12:04:07.171156 systemd[1]: Started cri-containerd-b285f75c76f9eb0e99652f6dc9f8cc83e3f82c9642dbda9731962a31caf61473.scope - libcontainer container b285f75c76f9eb0e99652f6dc9f8cc83e3f82c9642dbda9731962a31caf61473. 
Jan 29 12:04:07.218175 containerd[1712]: time="2025-01-29T12:04:07.218059438Z" level=info msg="StartContainer for \"b285f75c76f9eb0e99652f6dc9f8cc83e3f82c9642dbda9731962a31caf61473\" returns successfully" Jan 29 12:04:07.514126 containerd[1712]: time="2025-01-29T12:04:07.513164735Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:07.519595 containerd[1712]: time="2025-01-29T12:04:07.519543995Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 12:04:07.520117 containerd[1712]: time="2025-01-29T12:04:07.520080608Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 444.392325ms" Jan 29 12:04:07.520275 containerd[1712]: time="2025-01-29T12:04:07.520234112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 12:04:07.524074 containerd[1712]: time="2025-01-29T12:04:07.523158185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 12:04:07.524890 containerd[1712]: time="2025-01-29T12:04:07.524864127Z" level=info msg="CreateContainer within sandbox \"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 12:04:07.566593 containerd[1712]: time="2025-01-29T12:04:07.566534167Z" level=info msg="CreateContainer within sandbox \"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"0734cbe9c2b0771ea72f8710822d1df0e5ba771b78eef692ec0ee3d1ed176786\"" Jan 29 12:04:07.567537 containerd[1712]: time="2025-01-29T12:04:07.567499291Z" level=info msg="StartContainer for \"0734cbe9c2b0771ea72f8710822d1df0e5ba771b78eef692ec0ee3d1ed176786\"" Jan 29 12:04:07.598211 systemd[1]: Started cri-containerd-0734cbe9c2b0771ea72f8710822d1df0e5ba771b78eef692ec0ee3d1ed176786.scope - libcontainer container 0734cbe9c2b0771ea72f8710822d1df0e5ba771b78eef692ec0ee3d1ed176786. 
Jan 29 12:04:07.660382 containerd[1712]: time="2025-01-29T12:04:07.660318808Z" level=info msg="StartContainer for \"0734cbe9c2b0771ea72f8710822d1df0e5ba771b78eef692ec0ee3d1ed176786\" returns successfully" Jan 29 12:04:08.409404 kubelet[3282]: I0129 12:04:08.409291 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:04:08.432929 kubelet[3282]: I0129 12:04:08.432742 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-559dc9496c-vw9l2" podStartSLOduration=30.111752521 podStartE2EDuration="36.432715183s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:04:01.20169781 +0000 UTC m=+50.178617092" lastFinishedPulling="2025-01-29 12:04:07.522660372 +0000 UTC m=+56.499579754" observedRunningTime="2025-01-29 12:04:08.431648457 +0000 UTC m=+57.408567839" watchObservedRunningTime="2025-01-29 12:04:08.432715183 +0000 UTC m=+57.409634565" Jan 29 12:04:08.434116 kubelet[3282]: I0129 12:04:08.433056 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-559dc9496c-djhw8" podStartSLOduration=30.538629556 podStartE2EDuration="36.433042592s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:04:01.180210215 +0000 UTC m=+50.157129497" lastFinishedPulling="2025-01-29 12:04:07.074623251 +0000 UTC m=+56.051542533" observedRunningTime="2025-01-29 12:04:07.412176815 +0000 UTC m=+56.389096197" watchObservedRunningTime="2025-01-29 12:04:08.433042592 +0000 UTC m=+57.409961974" Jan 29 12:04:09.147623 containerd[1712]: time="2025-01-29T12:04:09.147551623Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:09.152015 containerd[1712]: time="2025-01-29T12:04:09.151906531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 12:04:09.157607 containerd[1712]: time="2025-01-29T12:04:09.157558973Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:09.162974 containerd[1712]: time="2025-01-29T12:04:09.162933407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 12:04:09.164352 containerd[1712]: time="2025-01-29T12:04:09.164313641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.641119956s" Jan 29 12:04:09.164452 containerd[1712]: time="2025-01-29T12:04:09.164365842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 12:04:09.167775 containerd[1712]: time="2025-01-29T12:04:09.167736426Z" level=info msg="CreateContainer within sandbox \"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb\" for 
container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 12:04:09.224258 containerd[1712]: time="2025-01-29T12:04:09.224195135Z" level=info msg="CreateContainer within sandbox \"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"3e43c9c6ef3b6554cebd93e95c41416576776354d4e9adf4ee819f8281f6c061\"" Jan 29 12:04:09.227592 containerd[1712]: time="2025-01-29T12:04:09.227536719Z" level=info msg="StartContainer for \"3e43c9c6ef3b6554cebd93e95c41416576776354d4e9adf4ee819f8281f6c061\"" Jan 29 12:04:09.299461 systemd[1]: Started cri-containerd-3e43c9c6ef3b6554cebd93e95c41416576776354d4e9adf4ee819f8281f6c061.scope - libcontainer container 3e43c9c6ef3b6554cebd93e95c41416576776354d4e9adf4ee819f8281f6c061. Jan 29 12:04:09.362920 containerd[1712]: time="2025-01-29T12:04:09.362241481Z" level=info msg="StartContainer for \"3e43c9c6ef3b6554cebd93e95c41416576776354d4e9adf4ee819f8281f6c061\" returns successfully" Jan 29 12:04:09.594463 kubelet[3282]: I0129 12:04:09.594418 3282 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 12:04:09.596724 kubelet[3282]: I0129 12:04:09.595266 3282 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 12:04:09.638019 kubelet[3282]: I0129 12:04:09.637106 3282 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-s2ffd" podStartSLOduration=29.453707212 podStartE2EDuration="37.637079439s" podCreationTimestamp="2025-01-29 12:03:32 +0000 UTC" firstStartedPulling="2025-01-29 12:04:00.981737434 +0000 UTC m=+49.958656716" lastFinishedPulling="2025-01-29 12:04:09.165109661 +0000 UTC m=+58.142028943" observedRunningTime="2025-01-29 12:04:09.44517155 +0000 UTC m=+58.422090932" watchObservedRunningTime="2025-01-29 12:04:09.637079439 +0000 UTC m=+58.613998721" Jan 29 12:04:11.132826 containerd[1712]: time="2025-01-29T12:04:11.132774021Z" level=info msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.174 [WARNING][5769] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb070daf-6fcc-4f94-819c-4f946e1c33fb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8", Pod:"coredns-7db6d8ff4d-x2x7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic71de8377ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.175 [INFO][5769] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.175 [INFO][5769] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" iface="eth0" netns="" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.175 [INFO][5769] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.175 [INFO][5769] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.195 [INFO][5775] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.195 [INFO][5775] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.195 [INFO][5775] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.200 [WARNING][5775] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.200 [INFO][5775] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.203 [INFO][5775] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.205068 containerd[1712]: 2025-01-29 12:04:11.204 [INFO][5769] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.206057 containerd[1712]: time="2025-01-29T12:04:11.205087316Z" level=info msg="TearDown network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" successfully" Jan 29 12:04:11.206057 containerd[1712]: time="2025-01-29T12:04:11.205122517Z" level=info msg="StopPodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" returns successfully" Jan 29 12:04:11.206057 containerd[1712]: time="2025-01-29T12:04:11.205903937Z" level=info msg="RemovePodSandbox for \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" Jan 29 12:04:11.206057 containerd[1712]: time="2025-01-29T12:04:11.205947238Z" level=info msg="Forcibly stopping sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\"" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.261 [WARNING][5793] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"fb070daf-6fcc-4f94-819c-4f946e1c33fb", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"77b4eb283ab3ce50f18fba018538b85d2af68ea04efaed87dbad6e6a4a47d1a8", Pod:"coredns-7db6d8ff4d-x2x7f", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic71de8377ab", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.261 [INFO][5793] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.261 [INFO][5793] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" iface="eth0" netns="" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.261 [INFO][5793] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.261 [INFO][5793] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.293 [INFO][5799] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.293 [INFO][5799] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.293 [INFO][5799] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.298 [WARNING][5799] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.298 [INFO][5799] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" HandleID="k8s-pod-network.3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--x2x7f-eth0" Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.299 [INFO][5799] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.301536 containerd[1712]: 2025-01-29 12:04:11.300 [INFO][5793] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f" Jan 29 12:04:11.302495 containerd[1712]: time="2025-01-29T12:04:11.301581412Z" level=info msg="TearDown network for sandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" successfully" Jan 29 12:04:11.314550 containerd[1712]: time="2025-01-29T12:04:11.314492332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:11.314702 containerd[1712]: time="2025-01-29T12:04:11.314587235Z" level=info msg="RemovePodSandbox \"3ee25e9069dc9b2edb7c042a360412b44045c7fba11f05b26c4e0f032a960c5f\" returns successfully" Jan 29 12:04:11.315366 containerd[1712]: time="2025-01-29T12:04:11.315332153Z" level=info msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.350 [WARNING][5817] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad7842c6-0124-41f1-be81-515378bf6b06", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc", Pod:"coredns-7db6d8ff4d-dvzbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali598963082d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.350 [INFO][5817] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.350 [INFO][5817] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" iface="eth0" netns="" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.350 [INFO][5817] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.350 [INFO][5817] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.373 [INFO][5823] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.373 [INFO][5823] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.373 [INFO][5823] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.379 [WARNING][5823] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.379 [INFO][5823] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.381 [INFO][5823] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.383239 containerd[1712]: 2025-01-29 12:04:11.382 [INFO][5817] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.383239 containerd[1712]: time="2025-01-29T12:04:11.383213338Z" level=info msg="TearDown network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" successfully" Jan 29 12:04:11.384786 containerd[1712]: time="2025-01-29T12:04:11.383248339Z" level=info msg="StopPodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" returns successfully" Jan 29 12:04:11.385487 containerd[1712]: time="2025-01-29T12:04:11.385453494Z" level=info msg="RemovePodSandbox for \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" Jan 29 12:04:11.385487 containerd[1712]: time="2025-01-29T12:04:11.385491195Z" level=info msg="Forcibly stopping sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\"" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.421 [WARNING][5841] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"ad7842c6-0124-41f1-be81-515378bf6b06", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"8015037e447606324a56c90eeaa2bd5195cef5ff45541be2d458d8d144e4b7bc", Pod:"coredns-7db6d8ff4d-dvzbx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.106.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali598963082d4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.421 [INFO][5841] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.421 [INFO][5841] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" iface="eth0" netns="" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.421 [INFO][5841] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.422 [INFO][5841] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.442 [INFO][5848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.442 [INFO][5848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.442 [INFO][5848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.450 [WARNING][5848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.450 [INFO][5848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" HandleID="k8s-pod-network.b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Workload="ci--4081.3.0--a--76e05e3785-k8s-coredns--7db6d8ff4d--dvzbx-eth0" Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.451 [INFO][5848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.453779 containerd[1712]: 2025-01-29 12:04:11.452 [INFO][5841] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72" Jan 29 12:04:11.454513 containerd[1712]: time="2025-01-29T12:04:11.453826391Z" level=info msg="TearDown network for sandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" successfully" Jan 29 12:04:11.461754 containerd[1712]: time="2025-01-29T12:04:11.461667686Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:11.461915 containerd[1712]: time="2025-01-29T12:04:11.461760988Z" level=info msg="RemovePodSandbox \"b67cdc6ce2c4d3db48ddc9ad4c066ceef680d89012e25e64c0ca6dc5a6702b72\" returns successfully" Jan 29 12:04:11.463545 containerd[1712]: time="2025-01-29T12:04:11.463518532Z" level=info msg="StopPodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.503 [WARNING][5867] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2047c6-04db-4422-b1e5-5b03f71d15f2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe", Pod:"calico-apiserver-559dc9496c-vw9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f69006e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.503 [INFO][5867] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.503 [INFO][5867] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" iface="eth0" netns="" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.503 [INFO][5867] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.503 [INFO][5867] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.525 [INFO][5873] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.525 [INFO][5873] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.525 [INFO][5873] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.531 [WARNING][5873] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.531 [INFO][5873] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.534 [INFO][5873] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.536158 containerd[1712]: 2025-01-29 12:04:11.535 [INFO][5867] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.536808 containerd[1712]: time="2025-01-29T12:04:11.536221636Z" level=info msg="TearDown network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" successfully" Jan 29 12:04:11.536808 containerd[1712]: time="2025-01-29T12:04:11.536269637Z" level=info msg="StopPodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" returns successfully" Jan 29 12:04:11.537196 containerd[1712]: time="2025-01-29T12:04:11.537159760Z" level=info msg="RemovePodSandbox for \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" Jan 29 12:04:11.537196 containerd[1712]: time="2025-01-29T12:04:11.537196560Z" level=info msg="Forcibly stopping sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\"" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.585 [WARNING][5891] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"ad2047c6-04db-4422-b1e5-5b03f71d15f2", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"d53003fc1b1bfe9a8e93df0b83db2f37191c2f80587079a9328dee79145b14fe", Pod:"calico-apiserver-559dc9496c-vw9l2", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali43f69006e59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.585 [INFO][5891] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.585 [INFO][5891] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" iface="eth0" netns="" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.585 [INFO][5891] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.585 [INFO][5891] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.609 [INFO][5898] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.610 [INFO][5898] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.610 [INFO][5898] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.617 [WARNING][5898] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.617 [INFO][5898] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" HandleID="k8s-pod-network.a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--vw9l2-eth0" Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.618 [INFO][5898] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.621046 containerd[1712]: 2025-01-29 12:04:11.619 [INFO][5891] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42" Jan 29 12:04:11.622045 containerd[1712]: time="2025-01-29T12:04:11.621116144Z" level=info msg="TearDown network for sandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" successfully" Jan 29 12:04:11.629851 containerd[1712]: time="2025-01-29T12:04:11.629786759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:11.630052 containerd[1712]: time="2025-01-29T12:04:11.629877961Z" level=info msg="RemovePodSandbox \"a1e1807fc206a81c77d007875237cdf64c4700fded589a2fcf63cfb412c56e42\" returns successfully" Jan 29 12:04:11.630497 containerd[1712]: time="2025-01-29T12:04:11.630452675Z" level=info msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.669 [WARNING][5916] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71138072-0e15-4069-b62b-58fc03bf5cf2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb", Pod:"csi-node-driver-s2ffd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib070048f9fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.670 [INFO][5916] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.670 [INFO][5916] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" iface="eth0" netns="" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.670 [INFO][5916] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.670 [INFO][5916] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.703 [INFO][5922] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.703 [INFO][5922] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.703 [INFO][5922] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.721 [WARNING][5922] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.721 [INFO][5922] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.723 [INFO][5922] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.727089 containerd[1712]: 2025-01-29 12:04:11.725 [INFO][5916] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.728941 containerd[1712]: time="2025-01-29T12:04:11.726968171Z" level=info msg="TearDown network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" successfully" Jan 29 12:04:11.728941 containerd[1712]: time="2025-01-29T12:04:11.728104499Z" level=info msg="StopPodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" returns successfully" Jan 29 12:04:11.730538 containerd[1712]: time="2025-01-29T12:04:11.730311954Z" level=info msg="RemovePodSandbox for \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" Jan 29 12:04:11.730538 containerd[1712]: time="2025-01-29T12:04:11.730350455Z" level=info msg="Forcibly stopping sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\"" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.827 [WARNING][5940] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71138072-0e15-4069-b62b-58fc03bf5cf2", ResourceVersion:"861", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"559e1e35e095f887dc837168c60b6489a1f3e9597d98fa5a7fd2402e52131ddb", Pod:"csi-node-driver-s2ffd", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.106.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib070048f9fb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.827 [INFO][5940] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.828 [INFO][5940] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" iface="eth0" netns="" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.828 [INFO][5940] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.828 [INFO][5940] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.851 [INFO][5947] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.852 [INFO][5947] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.852 [INFO][5947] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.858 [WARNING][5947] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.858 [INFO][5947] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" HandleID="k8s-pod-network.2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Workload="ci--4081.3.0--a--76e05e3785-k8s-csi--node--driver--s2ffd-eth0" Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.860 [INFO][5947] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.862432 containerd[1712]: 2025-01-29 12:04:11.861 [INFO][5940] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133" Jan 29 12:04:11.863172 containerd[1712]: time="2025-01-29T12:04:11.862489135Z" level=info msg="TearDown network for sandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" successfully" Jan 29 12:04:11.870732 containerd[1712]: time="2025-01-29T12:04:11.870671438Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:11.870893 containerd[1712]: time="2025-01-29T12:04:11.870763241Z" level=info msg="RemovePodSandbox \"2fbbbe8edfe8eb492cfed92706fb7659a02b1852c6df6f524f6e77df49bcb133\" returns successfully" Jan 29 12:04:11.871443 containerd[1712]: time="2025-01-29T12:04:11.871381556Z" level=info msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.908 [WARNING][5965] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0", GenerateName:"calico-kube-controllers-54d96776db-", Namespace:"calico-system", SelfLink:"", UID:"20cf8bd9-7e52-4094-8e72-0357f70114de", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d96776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b", Pod:"calico-kube-controllers-54d96776db-zgpqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29ed562aa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.908 [INFO][5965] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.908 [INFO][5965] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" iface="eth0" netns="" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.908 [INFO][5965] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.908 [INFO][5965] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.930 [INFO][5971] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.930 [INFO][5971] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.930 [INFO][5971] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.936 [WARNING][5971] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.936 [INFO][5971] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.938 [INFO][5971] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:11.940719 containerd[1712]: 2025-01-29 12:04:11.939 [INFO][5965] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:11.941466 containerd[1712]: time="2025-01-29T12:04:11.940970483Z" level=info msg="TearDown network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" successfully" Jan 29 12:04:11.941466 containerd[1712]: time="2025-01-29T12:04:11.941035385Z" level=info msg="StopPodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" returns successfully" Jan 29 12:04:11.941947 containerd[1712]: time="2025-01-29T12:04:11.941920407Z" level=info msg="RemovePodSandbox for \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" Jan 29 12:04:11.942055 containerd[1712]: time="2025-01-29T12:04:11.941953908Z" level=info msg="Forcibly stopping sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\"" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:11.981 [WARNING][5989] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0", GenerateName:"calico-kube-controllers-54d96776db-", Namespace:"calico-system", SelfLink:"", UID:"20cf8bd9-7e52-4094-8e72-0357f70114de", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54d96776db", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"b6e1abd9bd79fa42117cba541c8c353817781f2ef08054f5ef1b80a5c9810c7b", Pod:"calico-kube-controllers-54d96776db-zgpqq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.106.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali29ed562aa6a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:11.981 [INFO][5989] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:11.981 [INFO][5989] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" iface="eth0" netns="" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:11.981 [INFO][5989] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:11.981 [INFO][5989] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.001 [INFO][5995] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.001 [INFO][5995] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.001 [INFO][5995] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.007 [WARNING][5995] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.007 [INFO][5995] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" HandleID="k8s-pod-network.461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--kube--controllers--54d96776db--zgpqq-eth0" Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.009 [INFO][5995] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:12.012075 containerd[1712]: 2025-01-29 12:04:12.010 [INFO][5989] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e" Jan 29 12:04:12.012075 containerd[1712]: time="2025-01-29T12:04:12.011671538Z" level=info msg="TearDown network for sandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" successfully" Jan 29 12:04:12.021815 containerd[1712]: time="2025-01-29T12:04:12.021524883Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:12.021815 containerd[1712]: time="2025-01-29T12:04:12.021610385Z" level=info msg="RemovePodSandbox \"461c2869409ac14c898b746c552a99b81709692cd99df674067cc83ac805557e\" returns successfully" Jan 29 12:04:12.022242 containerd[1712]: time="2025-01-29T12:04:12.022210800Z" level=info msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.060 [WARNING][6013] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa7a0ece-20eb-47fa-a309-d56e36ab93b3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2", Pod:"calico-apiserver-559dc9496c-djhw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8044648e1b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.060 [INFO][6013] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.060 [INFO][6013] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" iface="eth0" netns="" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.061 [INFO][6013] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.061 [INFO][6013] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.091 [INFO][6020] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.091 [INFO][6020] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.091 [INFO][6020] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.098 [WARNING][6020] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.098 [INFO][6020] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.100 [INFO][6020] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:12.102445 containerd[1712]: 2025-01-29 12:04:12.101 [INFO][6013] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.104253 containerd[1712]: time="2025-01-29T12:04:12.102493993Z" level=info msg="TearDown network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" successfully" Jan 29 12:04:12.104253 containerd[1712]: time="2025-01-29T12:04:12.102529794Z" level=info msg="StopPodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" returns successfully" Jan 29 12:04:12.104253 containerd[1712]: time="2025-01-29T12:04:12.103218111Z" level=info msg="RemovePodSandbox for \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" Jan 29 12:04:12.104253 containerd[1712]: time="2025-01-29T12:04:12.103255412Z" level=info msg="Forcibly stopping sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\"" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.150 [WARNING][6039] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0", GenerateName:"calico-apiserver-559dc9496c-", Namespace:"calico-apiserver", SelfLink:"", UID:"aa7a0ece-20eb-47fa-a309-d56e36ab93b3", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 12, 3, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"559dc9496c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-76e05e3785", ContainerID:"248af358f1ba6abc02495f45f458442a4bd6acd4320cb2a7f2ccf2e08b61e3e2", Pod:"calico-apiserver-559dc9496c-djhw8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.106.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali8044648e1b0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.151 [INFO][6039] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.151 [INFO][6039] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" iface="eth0" netns="" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.151 [INFO][6039] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.151 [INFO][6039] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.209 [INFO][6045] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.209 [INFO][6045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.209 [INFO][6045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.227 [WARNING][6045] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.227 [INFO][6045] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" HandleID="k8s-pod-network.7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Workload="ci--4081.3.0--a--76e05e3785-k8s-calico--apiserver--559dc9496c--djhw8-eth0" Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.229 [INFO][6045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 12:04:12.236019 containerd[1712]: 2025-01-29 12:04:12.233 [INFO][6039] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768" Jan 29 12:04:12.236019 containerd[1712]: time="2025-01-29T12:04:12.234485969Z" level=info msg="TearDown network for sandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" successfully" Jan 29 12:04:12.247245 containerd[1712]: time="2025-01-29T12:04:12.247153384Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 12:04:12.247506 containerd[1712]: time="2025-01-29T12:04:12.247483292Z" level=info msg="RemovePodSandbox \"7f2677d19126626e12648d1dd079e749f4e0d83c003cde978a5d2c0f2955b768\" returns successfully" Jan 29 12:04:42.984866 kubelet[3282]: I0129 12:04:42.984502 3282 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 12:05:16.238337 systemd[1]: Started sshd@7-10.200.8.19:22-10.200.16.10:51290.service - OpenSSH per-connection server daemon (10.200.16.10:51290). Jan 29 12:05:16.893289 sshd[6231]: Accepted publickey for core from 10.200.16.10 port 51290 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:16.896110 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:16.903204 systemd-logind[1693]: New session 10 of user core. Jan 29 12:05:16.908214 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 12:05:17.443206 sshd[6231]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:17.450708 systemd[1]: sshd@7-10.200.8.19:22-10.200.16.10:51290.service: Deactivated successfully. Jan 29 12:05:17.453761 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 12:05:17.454585 systemd-logind[1693]: Session 10 logged out. Waiting for processes to exit. Jan 29 12:05:17.456331 systemd-logind[1693]: Removed session 10. Jan 29 12:05:22.567314 systemd[1]: Started sshd@8-10.200.8.19:22-10.200.16.10:51304.service - OpenSSH per-connection server daemon (10.200.16.10:51304). Jan 29 12:05:23.221838 sshd[6246]: Accepted publickey for core from 10.200.16.10 port 51304 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:23.224071 sshd[6246]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:23.230164 systemd-logind[1693]: New session 11 of user core. Jan 29 12:05:23.234209 systemd[1]: Started session-11.scope - Session 11 of User core. 
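The repeated containerd/Calico entries above follow one pattern per stale sandbox: StopPodSandbox, a CNI teardown in which the IPAM plugin takes the host-wide lock but finds no address left to release ("Asked to release address but it doesn't exist. Ignoring"), then RemovePodSandbox returning successfully. Below is a minimal sketch (not part of the original log) for summarizing that pattern from a saved copy of this journal; it assumes the text keeps the msg="..." formatting shown here, and the default file name and function name are illustrative.

#!/usr/bin/env python3
"""Minimal sketch: summarize the containerd sandbox-teardown entries above
from a saved text copy of this journal. Assumes the msg="..." formatting
shown in the log; file and function names are illustrative."""
import re
import sys
from collections import defaultdict

# Matches the three message shapes visible above, e.g.
#   msg="StopPodSandbox for \"<64-hex id>\""
#   msg="TearDown network for sandbox \"<64-hex id>\" successfully"
#   msg="RemovePodSandbox \"<64-hex id>\" returns successfully"
EVENT_RE = re.compile(
    r'msg="(?P<event>StopPodSandbox for|TearDown network for sandbox|RemovePodSandbox)'
    r' \\"(?P<sandbox>[0-9a-f]{64})\\"'
)

def summarize(path):
    events = defaultdict(list)  # sandbox id -> list of event names seen
    with open(path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            for m in EVENT_RE.finditer(line):
                events[m.group("sandbox")].append(m.group("event"))
    for sandbox, seen in events.items():
        removed = "RemovePodSandbox" in seen
        print(f"{sandbox[:12]}...  events={len(seen)}  removed={removed}")

if __name__ == "__main__":
    # Illustrative default path; pass the real journal dump as the first argument.
    summarize(sys.argv[1] if len(sys.argv) > 1 else "containerd.log")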
Jan 29 12:05:23.752087 sshd[6246]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:23.755597 systemd[1]: sshd@8-10.200.8.19:22-10.200.16.10:51304.service: Deactivated successfully. Jan 29 12:05:23.758166 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 12:05:23.759744 systemd-logind[1693]: Session 11 logged out. Waiting for processes to exit. Jan 29 12:05:23.761726 systemd-logind[1693]: Removed session 11. Jan 29 12:05:28.868203 systemd[1]: Started sshd@9-10.200.8.19:22-10.200.16.10:39694.service - OpenSSH per-connection server daemon (10.200.16.10:39694). Jan 29 12:05:29.525836 sshd[6270]: Accepted publickey for core from 10.200.16.10 port 39694 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:29.526611 sshd[6270]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:29.532072 systemd-logind[1693]: New session 12 of user core. Jan 29 12:05:29.537161 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 29 12:05:30.045323 sshd[6270]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:30.050016 systemd[1]: sshd@9-10.200.8.19:22-10.200.16.10:39694.service: Deactivated successfully. Jan 29 12:05:30.052325 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 12:05:30.053092 systemd-logind[1693]: Session 12 logged out. Waiting for processes to exit. Jan 29 12:05:30.054129 systemd-logind[1693]: Removed session 12. Jan 29 12:05:35.168387 systemd[1]: Started sshd@10-10.200.8.19:22-10.200.16.10:39702.service - OpenSSH per-connection server daemon (10.200.16.10:39702). Jan 29 12:05:35.629920 systemd[1]: run-containerd-runc-k8s.io-d7cfd09a717746f2194d1e7d3e3eebf57038ec56105a02ec9d4c70481e83036e-runc.dZP5C5.mount: Deactivated successfully. Jan 29 12:05:35.816415 sshd[6289]: Accepted publickey for core from 10.200.16.10 port 39702 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:35.818097 sshd[6289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:35.822947 systemd-logind[1693]: New session 13 of user core. Jan 29 12:05:35.828140 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 12:05:36.335618 sshd[6289]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:36.339494 systemd[1]: sshd@10-10.200.8.19:22-10.200.16.10:39702.service: Deactivated successfully. Jan 29 12:05:36.343105 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 12:05:36.344957 systemd-logind[1693]: Session 13 logged out. Waiting for processes to exit. Jan 29 12:05:36.346369 systemd-logind[1693]: Removed session 13. Jan 29 12:05:41.456299 systemd[1]: Started sshd@11-10.200.8.19:22-10.200.16.10:33936.service - OpenSSH per-connection server daemon (10.200.16.10:33936). Jan 29 12:05:42.101289 sshd[6336]: Accepted publickey for core from 10.200.16.10 port 33936 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:42.102896 sshd[6336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:42.108116 systemd-logind[1693]: New session 14 of user core. Jan 29 12:05:42.113134 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 12:05:42.618477 sshd[6336]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:42.623357 systemd[1]: sshd@11-10.200.8.19:22-10.200.16.10:33936.service: Deactivated successfully. Jan 29 12:05:42.625866 systemd[1]: session-14.scope: Deactivated successfully. 
Jan 29 12:05:42.626907 systemd-logind[1693]: Session 14 logged out. Waiting for processes to exit. Jan 29 12:05:42.628325 systemd-logind[1693]: Removed session 14. Jan 29 12:05:42.741628 systemd[1]: Started sshd@12-10.200.8.19:22-10.200.16.10:33948.service - OpenSSH per-connection server daemon (10.200.16.10:33948). Jan 29 12:05:43.397348 sshd[6350]: Accepted publickey for core from 10.200.16.10 port 33948 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:43.399062 sshd[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:43.404695 systemd-logind[1693]: New session 15 of user core. Jan 29 12:05:43.410201 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 29 12:05:43.955698 sshd[6350]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:43.960886 systemd-logind[1693]: Session 15 logged out. Waiting for processes to exit. Jan 29 12:05:43.961562 systemd[1]: sshd@12-10.200.8.19:22-10.200.16.10:33948.service: Deactivated successfully. Jan 29 12:05:43.966062 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 12:05:43.967098 systemd-logind[1693]: Removed session 15. Jan 29 12:05:44.085304 systemd[1]: Started sshd@13-10.200.8.19:22-10.200.16.10:33950.service - OpenSSH per-connection server daemon (10.200.16.10:33950). Jan 29 12:05:44.731075 sshd[6361]: Accepted publickey for core from 10.200.16.10 port 33950 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:44.732810 sshd[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:44.737045 systemd-logind[1693]: New session 16 of user core. Jan 29 12:05:44.742172 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 12:05:45.255679 sshd[6361]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:45.260348 systemd[1]: sshd@13-10.200.8.19:22-10.200.16.10:33950.service: Deactivated successfully. Jan 29 12:05:45.262596 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 12:05:45.263932 systemd-logind[1693]: Session 16 logged out. Waiting for processes to exit. Jan 29 12:05:45.265029 systemd-logind[1693]: Removed session 16. Jan 29 12:05:45.359610 systemd[1]: run-containerd-runc-k8s.io-ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238-runc.zcmcj7.mount: Deactivated successfully. Jan 29 12:05:50.378297 systemd[1]: Started sshd@14-10.200.8.19:22-10.200.16.10:39730.service - OpenSSH per-connection server daemon (10.200.16.10:39730). Jan 29 12:05:51.026019 sshd[6391]: Accepted publickey for core from 10.200.16.10 port 39730 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:51.027719 sshd[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:51.032046 systemd-logind[1693]: New session 17 of user core. Jan 29 12:05:51.035182 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 12:05:51.551041 sshd[6391]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:51.554260 systemd[1]: sshd@14-10.200.8.19:22-10.200.16.10:39730.service: Deactivated successfully. Jan 29 12:05:51.556855 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 12:05:51.558506 systemd-logind[1693]: Session 17 logged out. Waiting for processes to exit. Jan 29 12:05:51.560670 systemd-logind[1693]: Removed session 17. 
Jan 29 12:05:56.674327 systemd[1]: Started sshd@15-10.200.8.19:22-10.200.16.10:40882.service - OpenSSH per-connection server daemon (10.200.16.10:40882). Jan 29 12:05:57.320399 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 40882 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:05:57.322131 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:05:57.327891 systemd-logind[1693]: New session 18 of user core. Jan 29 12:05:57.333195 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 29 12:05:57.842930 sshd[6406]: pam_unix(sshd:session): session closed for user core Jan 29 12:05:57.847570 systemd[1]: sshd@15-10.200.8.19:22-10.200.16.10:40882.service: Deactivated successfully. Jan 29 12:05:57.850364 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 12:05:57.851392 systemd-logind[1693]: Session 18 logged out. Waiting for processes to exit. Jan 29 12:05:57.852926 systemd-logind[1693]: Removed session 18. Jan 29 12:06:02.963474 systemd[1]: Started sshd@16-10.200.8.19:22-10.200.16.10:40896.service - OpenSSH per-connection server daemon (10.200.16.10:40896). Jan 29 12:06:03.624878 sshd[6423]: Accepted publickey for core from 10.200.16.10 port 40896 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:03.626515 sshd[6423]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:03.631542 systemd-logind[1693]: New session 19 of user core. Jan 29 12:06:03.636232 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 12:06:04.143536 sshd[6423]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:04.148067 systemd-logind[1693]: Session 19 logged out. Waiting for processes to exit. Jan 29 12:06:04.148638 systemd[1]: sshd@16-10.200.8.19:22-10.200.16.10:40896.service: Deactivated successfully. Jan 29 12:06:04.151958 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 12:06:04.156917 systemd-logind[1693]: Removed session 19. Jan 29 12:06:09.264295 systemd[1]: Started sshd@17-10.200.8.19:22-10.200.16.10:54270.service - OpenSSH per-connection server daemon (10.200.16.10:54270). Jan 29 12:06:09.920023 sshd[6458]: Accepted publickey for core from 10.200.16.10 port 54270 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q Jan 29 12:06:09.921645 sshd[6458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 12:06:09.926060 systemd-logind[1693]: New session 20 of user core. Jan 29 12:06:09.933150 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 12:06:10.442429 sshd[6458]: pam_unix(sshd:session): session closed for user core Jan 29 12:06:10.445457 systemd[1]: sshd@17-10.200.8.19:22-10.200.16.10:54270.service: Deactivated successfully. Jan 29 12:06:10.448329 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 12:06:10.450498 systemd-logind[1693]: Session 20 logged out. Waiting for processes to exit. Jan 29 12:06:10.451824 systemd-logind[1693]: Removed session 20. Jan 29 12:06:15.560295 systemd[1]: Started sshd@18-10.200.8.19:22-10.200.16.10:54272.service - OpenSSH per-connection server daemon (10.200.16.10:54272). 
Jan 29 12:06:16.209747 sshd[6510]: Accepted publickey for core from 10.200.16.10 port 54272 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:16.211434 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:16.216701 systemd-logind[1693]: New session 21 of user core.
Jan 29 12:06:16.220134 systemd[1]: Started session-21.scope - Session 21 of User core.
Jan 29 12:06:16.752295 sshd[6510]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:16.759807 systemd[1]: sshd@18-10.200.8.19:22-10.200.16.10:54272.service: Deactivated successfully.
Jan 29 12:06:16.764340 systemd[1]: session-21.scope: Deactivated successfully.
Jan 29 12:06:16.768503 systemd-logind[1693]: Session 21 logged out. Waiting for processes to exit.
Jan 29 12:06:16.769830 systemd-logind[1693]: Removed session 21.
Jan 29 12:06:21.872443 systemd[1]: Started sshd@19-10.200.8.19:22-10.200.16.10:60944.service - OpenSSH per-connection server daemon (10.200.16.10:60944).
Jan 29 12:06:22.518255 sshd[6523]: Accepted publickey for core from 10.200.16.10 port 60944 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:22.519852 sshd[6523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:22.525064 systemd-logind[1693]: New session 22 of user core.
Jan 29 12:06:22.529330 systemd[1]: Started session-22.scope - Session 22 of User core.
Jan 29 12:06:23.045393 sshd[6523]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:23.049846 systemd[1]: sshd@19-10.200.8.19:22-10.200.16.10:60944.service: Deactivated successfully.
Jan 29 12:06:23.051933 systemd[1]: session-22.scope: Deactivated successfully.
Jan 29 12:06:23.052936 systemd-logind[1693]: Session 22 logged out. Waiting for processes to exit.
Jan 29 12:06:23.056626 systemd-logind[1693]: Removed session 22.
Jan 29 12:06:23.163532 systemd[1]: Started sshd@20-10.200.8.19:22-10.200.16.10:60948.service - OpenSSH per-connection server daemon (10.200.16.10:60948).
Jan 29 12:06:23.811951 sshd[6536]: Accepted publickey for core from 10.200.16.10 port 60948 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:23.815438 sshd[6536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:23.822777 systemd-logind[1693]: New session 23 of user core.
Jan 29 12:06:23.826309 systemd[1]: Started session-23.scope - Session 23 of User core.
Jan 29 12:06:24.401723 sshd[6536]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:24.406340 systemd[1]: sshd@20-10.200.8.19:22-10.200.16.10:60948.service: Deactivated successfully.
Jan 29 12:06:24.408699 systemd[1]: session-23.scope: Deactivated successfully.
Jan 29 12:06:24.409650 systemd-logind[1693]: Session 23 logged out. Waiting for processes to exit.
Jan 29 12:06:24.410792 systemd-logind[1693]: Removed session 23.
Jan 29 12:06:24.523309 systemd[1]: Started sshd@21-10.200.8.19:22-10.200.16.10:60952.service - OpenSSH per-connection server daemon (10.200.16.10:60952).
Jan 29 12:06:25.168157 sshd[6547]: Accepted publickey for core from 10.200.16.10 port 60952 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:25.169742 sshd[6547]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:25.175143 systemd-logind[1693]: New session 24 of user core.
Jan 29 12:06:25.180365 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 29 12:06:27.406006 sshd[6547]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:27.409613 systemd[1]: sshd@21-10.200.8.19:22-10.200.16.10:60952.service: Deactivated successfully.
Jan 29 12:06:27.412738 systemd[1]: session-24.scope: Deactivated successfully.
Jan 29 12:06:27.414343 systemd-logind[1693]: Session 24 logged out. Waiting for processes to exit.
Jan 29 12:06:27.415444 systemd-logind[1693]: Removed session 24.
Jan 29 12:06:27.523806 systemd[1]: Started sshd@22-10.200.8.19:22-10.200.16.10:60222.service - OpenSSH per-connection server daemon (10.200.16.10:60222).
Jan 29 12:06:28.180550 sshd[6567]: Accepted publickey for core from 10.200.16.10 port 60222 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:28.182266 sshd[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:28.187757 systemd-logind[1693]: New session 25 of user core.
Jan 29 12:06:28.192216 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 29 12:06:28.810626 sshd[6567]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:28.815339 systemd-logind[1693]: Session 25 logged out. Waiting for processes to exit.
Jan 29 12:06:28.816301 systemd[1]: sshd@22-10.200.8.19:22-10.200.16.10:60222.service: Deactivated successfully.
Jan 29 12:06:28.818935 systemd[1]: session-25.scope: Deactivated successfully.
Jan 29 12:06:28.819945 systemd-logind[1693]: Removed session 25.
Jan 29 12:06:28.929352 systemd[1]: Started sshd@23-10.200.8.19:22-10.200.16.10:60224.service - OpenSSH per-connection server daemon (10.200.16.10:60224).
Jan 29 12:06:29.582445 sshd[6578]: Accepted publickey for core from 10.200.16.10 port 60224 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:29.583942 sshd[6578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:29.588681 systemd-logind[1693]: New session 26 of user core.
Jan 29 12:06:29.593147 systemd[1]: Started session-26.scope - Session 26 of User core.
Jan 29 12:06:30.096126 sshd[6578]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:30.100778 systemd[1]: sshd@23-10.200.8.19:22-10.200.16.10:60224.service: Deactivated successfully.
Jan 29 12:06:30.103639 systemd[1]: session-26.scope: Deactivated successfully.
Jan 29 12:06:30.104618 systemd-logind[1693]: Session 26 logged out. Waiting for processes to exit.
Jan 29 12:06:30.105964 systemd-logind[1693]: Removed session 26.
Jan 29 12:06:35.216334 systemd[1]: Started sshd@24-10.200.8.19:22-10.200.16.10:60230.service - OpenSSH per-connection server daemon (10.200.16.10:60230).
Jan 29 12:06:35.864290 sshd[6592]: Accepted publickey for core from 10.200.16.10 port 60230 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:35.867238 sshd[6592]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:35.877167 systemd-logind[1693]: New session 27 of user core.
Jan 29 12:06:35.881152 systemd[1]: Started session-27.scope - Session 27 of User core.
Jan 29 12:06:36.445516 sshd[6592]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:36.449541 systemd-logind[1693]: Session 27 logged out. Waiting for processes to exit.
Jan 29 12:06:36.451811 systemd[1]: sshd@24-10.200.8.19:22-10.200.16.10:60230.service: Deactivated successfully.
Jan 29 12:06:36.455876 systemd[1]: session-27.scope: Deactivated successfully.
Jan 29 12:06:36.458122 systemd-logind[1693]: Removed session 27.
Jan 29 12:06:41.563624 systemd[1]: Started sshd@25-10.200.8.19:22-10.200.16.10:35224.service - OpenSSH per-connection server daemon (10.200.16.10:35224).
Jan 29 12:06:42.224787 sshd[6627]: Accepted publickey for core from 10.200.16.10 port 35224 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:42.226414 sshd[6627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:42.232801 systemd-logind[1693]: New session 28 of user core.
Jan 29 12:06:42.241151 systemd[1]: Started session-28.scope - Session 28 of User core.
Jan 29 12:06:42.745053 sshd[6627]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:42.748536 systemd[1]: sshd@25-10.200.8.19:22-10.200.16.10:35224.service: Deactivated successfully.
Jan 29 12:06:42.751694 systemd[1]: session-28.scope: Deactivated successfully.
Jan 29 12:06:42.753756 systemd-logind[1693]: Session 28 logged out. Waiting for processes to exit.
Jan 29 12:06:42.754860 systemd-logind[1693]: Removed session 28.
Jan 29 12:06:45.356783 systemd[1]: run-containerd-runc-k8s.io-ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238-runc.4PgFO1.mount: Deactivated successfully.
Jan 29 12:06:47.865334 systemd[1]: Started sshd@26-10.200.8.19:22-10.200.16.10:43512.service - OpenSSH per-connection server daemon (10.200.16.10:43512).
Jan 29 12:06:48.519784 sshd[6669]: Accepted publickey for core from 10.200.16.10 port 43512 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:48.521793 sshd[6669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:48.528353 systemd-logind[1693]: New session 29 of user core.
Jan 29 12:06:48.533238 systemd[1]: Started session-29.scope - Session 29 of User core.
Jan 29 12:06:49.036912 sshd[6669]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:49.042201 systemd[1]: sshd@26-10.200.8.19:22-10.200.16.10:43512.service: Deactivated successfully.
Jan 29 12:06:49.045613 systemd[1]: session-29.scope: Deactivated successfully.
Jan 29 12:06:49.046476 systemd-logind[1693]: Session 29 logged out. Waiting for processes to exit.
Jan 29 12:06:49.047534 systemd-logind[1693]: Removed session 29.
Jan 29 12:06:54.160336 systemd[1]: Started sshd@27-10.200.8.19:22-10.200.16.10:43524.service - OpenSSH per-connection server daemon (10.200.16.10:43524).
Jan 29 12:06:54.805184 sshd[6682]: Accepted publickey for core from 10.200.16.10 port 43524 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:06:54.806886 sshd[6682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:06:54.812398 systemd-logind[1693]: New session 30 of user core.
Jan 29 12:06:54.818185 systemd[1]: Started session-30.scope - Session 30 of User core.
Jan 29 12:06:55.326942 sshd[6682]: pam_unix(sshd:session): session closed for user core
Jan 29 12:06:55.331612 systemd-logind[1693]: Session 30 logged out. Waiting for processes to exit.
Jan 29 12:06:55.332078 systemd[1]: sshd@27-10.200.8.19:22-10.200.16.10:43524.service: Deactivated successfully.
Jan 29 12:06:55.334645 systemd[1]: session-30.scope: Deactivated successfully.
Jan 29 12:06:55.335928 systemd-logind[1693]: Removed session 30.
Jan 29 12:07:00.455305 systemd[1]: Started sshd@28-10.200.8.19:22-10.200.16.10:37516.service - OpenSSH per-connection server daemon (10.200.16.10:37516).
Jan 29 12:07:01.101206 sshd[6698]: Accepted publickey for core from 10.200.16.10 port 37516 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:01.103342 sshd[6698]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:01.108422 systemd-logind[1693]: New session 31 of user core.
Jan 29 12:07:01.113203 systemd[1]: Started session-31.scope - Session 31 of User core.
Jan 29 12:07:01.618540 sshd[6698]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:01.623482 systemd[1]: sshd@28-10.200.8.19:22-10.200.16.10:37516.service: Deactivated successfully.
Jan 29 12:07:01.625472 systemd[1]: session-31.scope: Deactivated successfully.
Jan 29 12:07:01.626841 systemd-logind[1693]: Session 31 logged out. Waiting for processes to exit.
Jan 29 12:07:01.627784 systemd-logind[1693]: Removed session 31.
Jan 29 12:07:06.739657 systemd[1]: Started sshd@29-10.200.8.19:22-10.200.16.10:59514.service - OpenSSH per-connection server daemon (10.200.16.10:59514).
Jan 29 12:07:07.387954 sshd[6740]: Accepted publickey for core from 10.200.16.10 port 59514 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:07.388671 sshd[6740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:07.393953 systemd-logind[1693]: New session 32 of user core.
Jan 29 12:07:07.398141 systemd[1]: Started session-32.scope - Session 32 of User core.
Jan 29 12:07:07.905816 sshd[6740]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:07.908740 systemd[1]: sshd@29-10.200.8.19:22-10.200.16.10:59514.service: Deactivated successfully.
Jan 29 12:07:07.911350 systemd[1]: session-32.scope: Deactivated successfully.
Jan 29 12:07:07.913365 systemd-logind[1693]: Session 32 logged out. Waiting for processes to exit.
Jan 29 12:07:07.915228 systemd-logind[1693]: Removed session 32.
Jan 29 12:07:13.025309 systemd[1]: Started sshd@30-10.200.8.19:22-10.200.16.10:59516.service - OpenSSH per-connection server daemon (10.200.16.10:59516).
Jan 29 12:07:13.375444 systemd[1]: run-containerd-runc-k8s.io-ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238-runc.AVQ9Ku.mount: Deactivated successfully.
Jan 29 12:07:13.677910 sshd[6760]: Accepted publickey for core from 10.200.16.10 port 59516 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:13.679933 sshd[6760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:13.685369 systemd-logind[1693]: New session 33 of user core.
Jan 29 12:07:13.693153 systemd[1]: Started session-33.scope - Session 33 of User core.
Jan 29 12:07:14.205467 sshd[6760]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:14.209464 systemd[1]: sshd@30-10.200.8.19:22-10.200.16.10:59516.service: Deactivated successfully.
Jan 29 12:07:14.212955 systemd[1]: session-33.scope: Deactivated successfully.
Jan 29 12:07:14.214697 systemd-logind[1693]: Session 33 logged out. Waiting for processes to exit.
Jan 29 12:07:14.215922 systemd-logind[1693]: Removed session 33.
Jan 29 12:07:15.359025 systemd[1]: run-containerd-runc-k8s.io-ef0f8ee74aa8c54780c3f62a8a463cd121039e84310bb601566835711afe5238-runc.7aPzFs.mount: Deactivated successfully.
Jan 29 12:07:19.321294 systemd[1]: Started sshd@31-10.200.8.19:22-10.200.16.10:47844.service - OpenSSH per-connection server daemon (10.200.16.10:47844).
Jan 29 12:07:19.970414 sshd[6822]: Accepted publickey for core from 10.200.16.10 port 47844 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:19.972066 sshd[6822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:19.976302 systemd-logind[1693]: New session 34 of user core.
Jan 29 12:07:19.982131 systemd[1]: Started session-34.scope - Session 34 of User core.
Jan 29 12:07:20.493335 sshd[6822]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:20.497397 systemd[1]: sshd@31-10.200.8.19:22-10.200.16.10:47844.service: Deactivated successfully.
Jan 29 12:07:20.499720 systemd[1]: session-34.scope: Deactivated successfully.
Jan 29 12:07:20.500669 systemd-logind[1693]: Session 34 logged out. Waiting for processes to exit.
Jan 29 12:07:20.501884 systemd-logind[1693]: Removed session 34.
Jan 29 12:07:25.619394 systemd[1]: Started sshd@32-10.200.8.19:22-10.200.16.10:47858.service - OpenSSH per-connection server daemon (10.200.16.10:47858).
Jan 29 12:07:26.270497 sshd[6834]: Accepted publickey for core from 10.200.16.10 port 47858 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:26.272165 sshd[6834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:26.276399 systemd-logind[1693]: New session 35 of user core.
Jan 29 12:07:26.280343 systemd[1]: Started session-35.scope - Session 35 of User core.
Jan 29 12:07:26.800319 sshd[6834]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:26.805026 systemd[1]: sshd@32-10.200.8.19:22-10.200.16.10:47858.service: Deactivated successfully.
Jan 29 12:07:26.807733 systemd[1]: session-35.scope: Deactivated successfully.
Jan 29 12:07:26.809339 systemd-logind[1693]: Session 35 logged out. Waiting for processes to exit.
Jan 29 12:07:26.810860 systemd-logind[1693]: Removed session 35.
Jan 29 12:07:31.919297 systemd[1]: Started sshd@33-10.200.8.19:22-10.200.16.10:49284.service - OpenSSH per-connection server daemon (10.200.16.10:49284).
Jan 29 12:07:32.571959 sshd[6850]: Accepted publickey for core from 10.200.16.10 port 49284 ssh2: RSA SHA256:M2tl2mAlrX1TJWryDGn0J6BxWUWnB/m2MaufQhrHc4Q
Jan 29 12:07:32.574078 sshd[6850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 29 12:07:32.580083 systemd-logind[1693]: New session 36 of user core.
Jan 29 12:07:32.587148 systemd[1]: Started session-36.scope - Session 36 of User core.
Jan 29 12:07:33.096659 sshd[6850]: pam_unix(sshd:session): session closed for user core
Jan 29 12:07:33.100922 systemd-logind[1693]: Session 36 logged out. Waiting for processes to exit.
Jan 29 12:07:33.101651 systemd[1]: sshd@33-10.200.8.19:22-10.200.16.10:49284.service: Deactivated successfully.
Jan 29 12:07:33.104271 systemd[1]: session-36.scope: Deactivated successfully.
Jan 29 12:07:33.105951 systemd-logind[1693]: Removed session 36.