Jan 30 13:47:29.055061 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:47:29.055095 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:29.055108 kernel: BIOS-provided physical RAM map:
Jan 30 13:47:29.055119 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 30 13:47:29.055128 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 30 13:47:29.055138 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 30 13:47:29.055150 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 30 13:47:29.055164 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 30 13:47:29.055174 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 30 13:47:29.055184 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 30 13:47:29.055195 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 30 13:47:29.055205 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 30 13:47:29.055215 kernel: printk: bootconsole [earlyser0] enabled
Jan 30 13:47:29.055226 kernel: NX (Execute Disable) protection: active
Jan 30 13:47:29.055242 kernel: APIC: Static calls initialized
Jan 30 13:47:29.055254 kernel: efi: EFI v2.7 by Microsoft
Jan 30 13:47:29.055266 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98
Jan 30 13:47:29.055277 kernel: SMBIOS 3.1.0 present.
Jan 30 13:47:29.055289 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 30 13:47:29.055301 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 30 13:47:29.055312 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 30 13:47:29.055324 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 30 13:47:29.055336 kernel: Hyper-V: Nested features: 0x1e0101
Jan 30 13:47:29.055347 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 30 13:47:29.055361 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 30 13:47:29.055373 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:47:29.055384 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 30 13:47:29.055397 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 30 13:47:29.055409 kernel: tsc: Detected 2593.906 MHz processor
Jan 30 13:47:29.055420 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:47:29.055433 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:47:29.055513 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 30 13:47:29.055525 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 30 13:47:29.055540 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:47:29.055552 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 30 13:47:29.055563 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 30 13:47:29.055575 kernel: Using GB pages for direct mapping
Jan 30 13:47:29.055587 kernel: Secure boot disabled
Jan 30 13:47:29.055598 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:47:29.055611 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 30 13:47:29.055628 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055643 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055655 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 30 13:47:29.055667 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 30 13:47:29.055680 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055693 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055705 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055720 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055732 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055745 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055757 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 30 13:47:29.055770 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 30 13:47:29.055782 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 30 13:47:29.055795 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 30 13:47:29.055807 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 30 13:47:29.055822 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 30 13:47:29.055835 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 30 13:47:29.055847 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 30 13:47:29.055860 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 30 13:47:29.055872 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 30 13:47:29.055884 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 30 13:47:29.055897 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:47:29.055909 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 30 13:47:29.055922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 30 13:47:29.055936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 30 13:47:29.055949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 30 13:47:29.055961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 30 13:47:29.055974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 30 13:47:29.055987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 30 13:47:29.055999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 30 13:47:29.056012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 30 13:47:29.056024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 30 13:47:29.056037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 30 13:47:29.056052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 30 13:47:29.056064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 30 13:47:29.056077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 30 13:47:29.056089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 30 13:47:29.056102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 30 13:47:29.056114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 30 13:47:29.056127 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 30 13:47:29.056139 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 30 13:47:29.056152 kernel: Zone ranges:
Jan 30 13:47:29.056167 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:47:29.056179 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 30 13:47:29.056192 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:47:29.056204 kernel: Movable zone start for each node
Jan 30 13:47:29.056217 kernel: Early memory node ranges
Jan 30 13:47:29.056229 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 30 13:47:29.056241 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 30 13:47:29.056254 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 30 13:47:29.056267 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 30 13:47:29.056282 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 30 13:47:29.056295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:47:29.056307 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 30 13:47:29.056320 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 30 13:47:29.056333 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 30 13:47:29.056346 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 30 13:47:29.056361 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:47:29.056375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 30 13:47:29.056394 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:47:29.056424 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 30 13:47:29.056477 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 30 13:47:29.056491 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 30 13:47:29.056504 kernel: Booting paravirtualized kernel on Hyper-V
Jan 30 13:47:29.056518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:47:29.056532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 30 13:47:29.056546 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 30 13:47:29.056559 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 30 13:47:29.056573 kernel: pcpu-alloc: [0] 0 1
Jan 30 13:47:29.056590 kernel: Hyper-V: PV spinlocks enabled
Jan 30 13:47:29.056603 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 30 13:47:29.056619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:29.056633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:47:29.056646 kernel: random: crng init done
Jan 30 13:47:29.056660 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 30 13:47:29.056673 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 30 13:47:29.056687 kernel: Fallback order for Node 0: 0
Jan 30 13:47:29.056703 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 30 13:47:29.056727 kernel: Policy zone: Normal
Jan 30 13:47:29.056741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:47:29.056758 kernel: software IO TLB: area num 2.
Jan 30 13:47:29.056773 kernel: Memory: 8077008K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310192K reserved, 0K cma-reserved)
Jan 30 13:47:29.056787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 30 13:47:29.056802 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:47:29.056816 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:47:29.056830 kernel: Dynamic Preempt: voluntary
Jan 30 13:47:29.056845 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:47:29.056861 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:47:29.056878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 30 13:47:29.056893 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:47:29.056907 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:47:29.056922 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:47:29.056937 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:47:29.056954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 30 13:47:29.056968 kernel: Using NULL legacy PIC
Jan 30 13:47:29.056982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 30 13:47:29.056997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 30 13:47:29.057012 kernel: Console: colour dummy device 80x25
Jan 30 13:47:29.057026 kernel: printk: console [tty1] enabled
Jan 30 13:47:29.057040 kernel: printk: console [ttyS0] enabled
Jan 30 13:47:29.057055 kernel: printk: bootconsole [earlyser0] disabled
Jan 30 13:47:29.057069 kernel: ACPI: Core revision 20230628
Jan 30 13:47:29.057083 kernel: Failed to register legacy timer interrupt
Jan 30 13:47:29.057100 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:47:29.057115 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 30 13:47:29.057129 kernel: Hyper-V: Using IPI hypercalls
Jan 30 13:47:29.057144 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 30 13:47:29.057158 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 30 13:47:29.057173 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 30 13:47:29.057187 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 30 13:47:29.057202 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 30 13:47:29.057216 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 30 13:47:29.057234 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 30 13:47:29.057248 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:47:29.057263 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:47:29.057277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:47:29.057292 kernel: Spectre V2 : Mitigation: Retpolines
Jan 30 13:47:29.057306 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:47:29.057320 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 30 13:47:29.057334 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 30 13:47:29.057349 kernel: RETBleed: Vulnerable
Jan 30 13:47:29.057365 kernel: Speculative Store Bypass: Vulnerable
Jan 30 13:47:29.057380 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:47:29.057394 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:47:29.057408 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:47:29.057422 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:47:29.057443 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:47:29.057458 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:47:29.057473 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 30 13:47:29.057487 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 30 13:47:29.057501 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 30 13:47:29.057516 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:47:29.057533 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 30 13:47:29.057548 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 30 13:47:29.057562 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 30 13:47:29.057577 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 30 13:47:29.057591 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:47:29.057606 kernel: pid_max: default: 32768 minimum: 301
Jan 30 13:47:29.057620 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:47:29.057635 kernel: landlock: Up and running.
Jan 30 13:47:29.057649 kernel: SELinux: Initializing.
Jan 30 13:47:29.057663 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:47:29.057678 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:47:29.057693 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 30 13:47:29.057710 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:29.057724 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:29.057740 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 30 13:47:29.057754 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 30 13:47:29.057769 kernel: signal: max sigframe size: 3632
Jan 30 13:47:29.057784 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:47:29.057798 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:47:29.057813 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:47:29.057827 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:47:29.057845 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:47:29.057859 kernel: .... node #0, CPUs: #1
Jan 30 13:47:29.057874 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 30 13:47:29.057889 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 30 13:47:29.057904 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:47:29.057919 kernel: smpboot: Max logical packages: 1
Jan 30 13:47:29.057933 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 30 13:47:29.057947 kernel: devtmpfs: initialized
Jan 30 13:47:29.057965 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:47:29.057979 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 30 13:47:29.057994 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:47:29.058009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 30 13:47:29.058023 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:47:29.058038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:47:29.058053 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:47:29.058067 kernel: audit: type=2000 audit(1738244848.028:1): state=initialized audit_enabled=0 res=1
Jan 30 13:47:29.058082 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:47:29.058098 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:47:29.058113 kernel: cpuidle: using governor menu
Jan 30 13:47:29.058128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:47:29.058143 kernel: dca service started, version 1.12.1
Jan 30 13:47:29.058157 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 30 13:47:29.058171 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:47:29.058186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:47:29.058201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:47:29.058219 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:47:29.058236 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:47:29.058251 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:47:29.058265 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:47:29.058278 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:47:29.058290 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:47:29.058302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:47:29.058316 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:47:29.058333 kernel: ACPI: Interpreter enabled
Jan 30 13:47:29.058362 kernel: ACPI: PM: (supports S0 S5)
Jan 30 13:47:29.058396 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:47:29.058407 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:47:29.058420 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 30 13:47:29.060905 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 30 13:47:29.060936 kernel: iommu: Default domain type: Translated
Jan 30 13:47:29.060950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:47:29.060964 kernel: efivars: Registered efivars operations
Jan 30 13:47:29.060977 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:47:29.060989 kernel: PCI: System does not support PCI
Jan 30 13:47:29.061007 kernel: vgaarb: loaded
Jan 30 13:47:29.061020 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 30 13:47:29.061032 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:47:29.061046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:47:29.061059 kernel: pnp: PnP ACPI init
Jan 30 13:47:29.061073 kernel: pnp: PnP ACPI: found 3 devices
Jan 30 13:47:29.061088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:47:29.061101 kernel: NET: Registered PF_INET protocol family
Jan 30 13:47:29.061114 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:47:29.061133 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 30 13:47:29.061147 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:47:29.061160 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 30 13:47:29.061174 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 30 13:47:29.061187 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 30 13:47:29.061199 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:29.061211 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 30 13:47:29.061222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:47:29.061232 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:47:29.061247 kernel: PCI: CLS 0 bytes, default 64
Jan 30 13:47:29.061257 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 30 13:47:29.061267 kernel: software IO TLB: mapped [mem 0x000000003ad8c000-0x000000003ed8c000] (64MB)
Jan 30 13:47:29.061276 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 30 13:47:29.061285 kernel: Initialise system trusted keyrings
Jan 30 13:47:29.061295 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 30 13:47:29.061304 kernel: Key type asymmetric registered
Jan 30 13:47:29.061314 kernel: Asymmetric key parser 'x509' registered
Jan 30 13:47:29.061322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 30 13:47:29.061335 kernel: io scheduler mq-deadline registered
Jan 30 13:47:29.061343 kernel: io scheduler kyber registered
Jan 30 13:47:29.061351 kernel: io scheduler bfq registered
Jan 30 13:47:29.061359 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 30 13:47:29.061367 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 30 13:47:29.061375 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 30 13:47:29.061383 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 30 13:47:29.061391 kernel: i8042: PNP: No PS/2 controller found.
Jan 30 13:47:29.061558 kernel: rtc_cmos 00:02: registered as rtc0
Jan 30 13:47:29.061660 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:47:28 UTC (1738244848)
Jan 30 13:47:29.061752 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 30 13:47:29.061764 kernel: intel_pstate: CPU model not supported
Jan 30 13:47:29.061775 kernel: efifb: probing for efifb
Jan 30 13:47:29.061784 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 30 13:47:29.061794 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 30 13:47:29.061803 kernel: efifb: scrolling: redraw
Jan 30 13:47:29.061817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 30 13:47:29.061825 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:47:29.061837 kernel: fb0: EFI VGA frame buffer device
Jan 30 13:47:29.061845 kernel: pstore: Using crash dump compression: deflate
Jan 30 13:47:29.061856 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 30 13:47:29.061864 kernel: NET: Registered PF_INET6 protocol family
Jan 30 13:47:29.061876 kernel: Segment Routing with IPv6
Jan 30 13:47:29.061884 kernel: In-situ OAM (IOAM) with IPv6
Jan 30 13:47:29.061896 kernel: NET: Registered PF_PACKET protocol family
Jan 30 13:47:29.061904 kernel: Key type dns_resolver registered
Jan 30 13:47:29.061918 kernel: IPI shorthand broadcast: enabled
Jan 30 13:47:29.061926 kernel: sched_clock: Marking stable (786003100, 40635700)->(1019873100, -193234300)
Jan 30 13:47:29.061937 kernel: registered taskstats version 1
Jan 30 13:47:29.061945 kernel: Loading compiled-in X.509 certificates
Jan 30 13:47:29.061957 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375'
Jan 30 13:47:29.061965 kernel: Key type .fscrypt registered
Jan 30 13:47:29.061976 kernel: Key type fscrypt-provisioning registered
Jan 30 13:47:29.061984 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 30 13:47:29.061997 kernel: ima: Allocated hash algorithm: sha1
Jan 30 13:47:29.062005 kernel: ima: No architecture policies found
Jan 30 13:47:29.062016 kernel: clk: Disabling unused clocks
Jan 30 13:47:29.062027 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 30 13:47:29.062035 kernel: Write protecting the kernel read-only data: 36864k
Jan 30 13:47:29.062047 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 30 13:47:29.062055 kernel: Run /init as init process
Jan 30 13:47:29.062066 kernel: with arguments:
Jan 30 13:47:29.062074 kernel: /init
Jan 30 13:47:29.062086 kernel: with environment:
Jan 30 13:47:29.062096 kernel: HOME=/
Jan 30 13:47:29.062107 kernel: TERM=linux
Jan 30 13:47:29.062115 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 30 13:47:29.062124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:47:29.062138 systemd[1]: Detected virtualization microsoft.
Jan 30 13:47:29.062147 systemd[1]: Detected architecture x86-64.
Jan 30 13:47:29.062158 systemd[1]: Running in initrd.
Jan 30 13:47:29.062170 systemd[1]: No hostname configured, using default hostname.
Jan 30 13:47:29.062180 systemd[1]: Hostname set to <localhost>.
Jan 30 13:47:29.062191 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:47:29.062201 systemd[1]: Queued start job for default target initrd.target.
Jan 30 13:47:29.062211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:29.062221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:29.062232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 30 13:47:29.062242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:47:29.062257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 30 13:47:29.062265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 30 13:47:29.062278 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 30 13:47:29.062287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 30 13:47:29.062299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:29.062308 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:29.062321 systemd[1]: Reached target paths.target - Path Units.
Jan 30 13:47:29.062336 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:47:29.062345 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:47:29.062355 systemd[1]: Reached target timers.target - Timer Units.
Jan 30 13:47:29.062366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:47:29.062377 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:47:29.062386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:47:29.062397 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:47:29.062407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:29.062418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:29.062431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:29.062453 systemd[1]: Reached target sockets.target - Socket Units.
Jan 30 13:47:29.062462 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 30 13:47:29.062473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:47:29.062482 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 30 13:47:29.062494 systemd[1]: Starting systemd-fsck-usr.service...
Jan 30 13:47:29.062502 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:47:29.062514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:47:29.062526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:29.062537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 30 13:47:29.062564 systemd-journald[176]: Collecting audit messages is disabled.
Jan 30 13:47:29.062588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:29.062601 systemd-journald[176]: Journal started
Jan 30 13:47:29.062637 systemd-journald[176]: Runtime Journal (/run/log/journal/b146bf7c454041bfa18a57c63db7116f) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:47:29.059498 systemd-modules-load[177]: Inserted module 'overlay'
Jan 30 13:47:29.074465 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:47:29.078902 systemd[1]: Finished systemd-fsck-usr.service.
Jan 30 13:47:29.091932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:47:29.099661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 30 13:47:29.112878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 30 13:47:29.113077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:29.123066 kernel: Bridge firewalling registered
Jan 30 13:47:29.118590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:47:29.127603 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 30 13:47:29.128203 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:29.142584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:47:29.144170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:29.144476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:47:29.147564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:47:29.170074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:29.174563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:29.182576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 30 13:47:29.186979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:29.197570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 30 13:47:29.204179 dracut-cmdline[213]: dracut-dracut-053
Jan 30 13:47:29.207941 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:47:29.255991 systemd-resolved[217]: Positive Trust Anchors:
Jan 30 13:47:29.256009 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 30 13:47:29.256069 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 30 13:47:29.282365 systemd-resolved[217]: Defaulting to hostname 'linux'.
Jan 30 13:47:29.285818 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 30 13:47:29.288586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:47:29.307457 kernel: SCSI subsystem initialized
Jan 30 13:47:29.317455 kernel: Loading iSCSI transport class v2.0-870.
Jan 30 13:47:29.328458 kernel: iscsi: registered transport (tcp)
Jan 30 13:47:29.349459 kernel: iscsi: registered transport (qla4xxx)
Jan 30 13:47:29.349523 kernel: QLogic iSCSI HBA Driver
Jan 30 13:47:29.385400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:47:29.394594 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 30 13:47:29.420666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 30 13:47:29.420749 kernel: device-mapper: uevent: version 1.0.3
Jan 30 13:47:29.423587 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 30 13:47:29.464466 kernel: raid6: avx512x4 gen() 18433 MB/s
Jan 30 13:47:29.483453 kernel: raid6: avx512x2 gen() 18619 MB/s
Jan 30 13:47:29.502446 kernel: raid6: avx512x1 gen() 18508 MB/s
Jan 30 13:47:29.521451 kernel: raid6: avx2x4 gen() 18581 MB/s
Jan 30 13:47:29.540449 kernel: raid6: avx2x2 gen() 18549 MB/s
Jan 30 13:47:29.559878 kernel: raid6: avx2x1 gen() 13746 MB/s
Jan 30 13:47:29.559912 kernel: raid6: using algorithm avx512x2 gen() 18619 MB/s
Jan 30 13:47:29.581120 kernel: raid6: .... xor() 30360 MB/s, rmw enabled
Jan 30 13:47:29.581157 kernel: raid6: using avx512x2 recovery algorithm
Jan 30 13:47:29.603460 kernel: xor: automatically using best checksumming function avx
Jan 30 13:47:29.755466 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 30 13:47:29.765359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:47:29.774674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:29.787889 systemd-udevd[398]: Using default interface naming scheme 'v255'.
Jan 30 13:47:29.792361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:29.804715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 30 13:47:29.820257 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Jan 30 13:47:29.848046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:47:29.857591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:47:29.898157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:29.907604 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 30 13:47:29.932651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:47:29.939304 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:47:29.941425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:29.942204 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:47:29.954225 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 30 13:47:29.975295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:47:29.998456 kernel: cryptd: max_cpu_qlen set to 1000
Jan 30 13:47:30.023039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:47:30.041808 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:47:30.041839 kernel: hv_vmbus: Vmbus version:5.2
Jan 30 13:47:30.041858 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:47:30.023269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:30.027365 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:30.030049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:30.030298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.050599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.066359 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:47:30.066404 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:47:30.069852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.082708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:30.084303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.099470 kernel: PTP clock support registered
Jan 30 13:47:30.100590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.118460 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:47:30.122456 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:47:30.127632 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 30 13:47:30.127689 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:47:30.134103 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:47:30.134140 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:47:30.140603 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:47:30.140630 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:47:30.140653 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:47:30.142854 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:47:30.924730 systemd-resolved[217]: Clock change detected. Flushing caches.
Jan 30 13:47:30.932670 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 13:47:30.932689 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:47:30.934646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.945181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:30.954981 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:47:30.960974 kernel: scsi host1: storvsc_host_t
Jan 30 13:47:30.963968 kernel: scsi host0: storvsc_host_t
Jan 30 13:47:30.969100 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:47:30.974979 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:47:30.987729 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:31.000592 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:47:31.003225 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:47:31.003252 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:47:31.015342 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:47:31.029379 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:47:31.029575 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:47:31.029746 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:47:31.029910 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:47:31.030098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:31.030120 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:47:31.156996 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: VF slot 1 added
Jan 30 13:47:31.166353 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:47:31.166384 kernel: hv_pci 64957417-7164-4fe6-802e-85e9fd580a78: PCI VMBus probing: Using version 0x10004
Jan 30 13:47:31.209199 kernel: hv_pci 64957417-7164-4fe6-802e-85e9fd580a78: PCI host bridge to bus 7164:00
Jan 30 13:47:31.210421 kernel: pci_bus 7164:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 30 13:47:31.210606 kernel: pci_bus 7164:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:47:31.210752 kernel: pci 7164:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 30 13:47:31.210958 kernel: pci 7164:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:47:31.211144 kernel: pci 7164:00:02.0: enabling Extended Tags
Jan 30 13:47:31.211309 kernel: pci 7164:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7164:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 30 13:47:31.211479 kernel: pci_bus 7164:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:47:31.211623 kernel: pci 7164:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:47:31.381057 kernel: mlx5_core 7164:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:47:31.619762 kernel: mlx5_core 7164:00:02.0: firmware version: 14.30.5000
Jan 30 13:47:31.620004 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: VF registering: eth1
Jan 30 13:47:31.620178 kernel: mlx5_core 7164:00:02.0 eth1: joined to eth0
Jan 30 13:47:31.620366 kernel: mlx5_core 7164:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:47:31.620529 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Jan 30 13:47:31.584024 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:47:31.628967 kernel: mlx5_core 7164:00:02.0 enP29028s1: renamed from eth1
Jan 30 13:47:31.649541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:47:31.664743 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:47:31.683927 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (442)
Jan 30 13:47:31.697445 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:47:31.698892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:47:31.707510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:47:31.726077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:31.734995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:32.742985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:32.744199 disk-uuid[600]: The operation has completed successfully.
Jan 30 13:47:32.815910 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:47:32.816058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:47:32.846100 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:47:32.853886 sh[686]: Success
Jan 30 13:47:32.883162 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:47:33.087843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:47:33.099746 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:47:33.104508 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:47:33.123126 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:47:33.123173 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:33.126425 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:47:33.128938 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:47:33.131210 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:47:33.551613 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:47:33.556361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:47:33.567106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:47:33.573512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:47:33.589002 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:33.589053 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:33.589083 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:33.609998 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:33.624987 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:33.625016 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:47:33.634410 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:47:33.643221 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:47:33.676168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:47:33.684215 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:47:33.705163 systemd-networkd[870]: lo: Link UP
Jan 30 13:47:33.705173 systemd-networkd[870]: lo: Gained carrier
Jan 30 13:47:33.707457 systemd-networkd[870]: Enumeration completed
Jan 30 13:47:33.707692 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:47:33.710911 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:33.710917 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:47:33.714965 systemd[1]: Reached target network.target - Network.
Jan 30 13:47:33.778973 kernel: mlx5_core 7164:00:02.0 enP29028s1: Link up
Jan 30 13:47:33.817985 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: Data path switched to VF: enP29028s1
Jan 30 13:47:33.818779 systemd-networkd[870]: enP29028s1: Link UP
Jan 30 13:47:33.818934 systemd-networkd[870]: eth0: Link UP
Jan 30 13:47:33.819166 systemd-networkd[870]: eth0: Gained carrier
Jan 30 13:47:33.819181 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:33.824167 systemd-networkd[870]: enP29028s1: Gained carrier
Jan 30 13:47:33.851067 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 30 13:47:34.355616 ignition[820]: Ignition 2.19.0
Jan 30 13:47:34.355630 ignition[820]: Stage: fetch-offline
Jan 30 13:47:34.355688 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.355701 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.355822 ignition[820]: parsed url from cmdline: ""
Jan 30 13:47:34.355827 ignition[820]: no config URL provided
Jan 30 13:47:34.355833 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.355844 ignition[820]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.355851 ignition[820]: failed to fetch config: resource requires networking
Jan 30 13:47:34.357300 ignition[820]: Ignition finished successfully
Jan 30 13:47:34.373750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:47:34.383238 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:47:34.397152 ignition[878]: Ignition 2.19.0
Jan 30 13:47:34.397162 ignition[878]: Stage: fetch
Jan 30 13:47:34.397380 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.397394 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.397485 ignition[878]: parsed url from cmdline: ""
Jan 30 13:47:34.397489 ignition[878]: no config URL provided
Jan 30 13:47:34.397496 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.397503 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.397527 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:47:34.486716 ignition[878]: GET result: OK
Jan 30 13:47:34.486820 ignition[878]: config has been read from IMDS userdata
Jan 30 13:47:34.486859 ignition[878]: parsing config with SHA512: 8962fcfeb70bd7875d3dfcfbf254a2677f81589fe1a85e667fe3c550a1431f4fe79e2f68fcf6be695053ff54a71a7db563ad0ae83d66a6208079830b3436dfce
Jan 30 13:47:34.495832 unknown[878]: fetched base config from "system"
Jan 30 13:47:34.495867 unknown[878]: fetched base config from "system"
Jan 30 13:47:34.497801 ignition[878]: fetch: fetch complete
Jan 30 13:47:34.495880 unknown[878]: fetched user config from "azure"
Jan 30 13:47:34.497810 ignition[878]: fetch: fetch passed
Jan 30 13:47:34.497875 ignition[878]: Ignition finished successfully
Jan 30 13:47:34.508025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:47:34.516116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:47:34.530747 ignition[885]: Ignition 2.19.0
Jan 30 13:47:34.530758 ignition[885]: Stage: kargs
Jan 30 13:47:34.530978 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.533751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:47:34.530991 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.531903 ignition[885]: kargs: kargs passed
Jan 30 13:47:34.531961 ignition[885]: Ignition finished successfully
Jan 30 13:47:34.545153 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:47:34.563282 ignition[891]: Ignition 2.19.0
Jan 30 13:47:34.563293 ignition[891]: Stage: disks
Jan 30 13:47:34.565188 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:47:34.563517 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.569211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:47:34.563529 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.573376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:47:34.564386 ignition[891]: disks: disks passed
Jan 30 13:47:34.576205 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:47:34.564428 ignition[891]: Ignition finished successfully
Jan 30 13:47:34.582378 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:47:34.597128 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:47:34.610105 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:47:34.664303 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:47:34.669383 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:47:34.683085 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:47:34.775965 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:47:34.776215 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:47:34.778246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:47:34.819053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:34.823625 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:47:34.839100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (910)
Jan 30 13:47:34.839162 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:34.840139 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:47:34.853266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:34.853294 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:34.853310 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:34.853028 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:47:34.853069 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:47:34.862237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:34.867143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:47:34.879135 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:47:34.911163 systemd-networkd[870]: eth0: Gained IPv6LL
Jan 30 13:47:35.295194 systemd-networkd[870]: enP29028s1: Gained IPv6LL
Jan 30 13:47:35.449252 coreos-metadata[912]: Jan 30 13:47:35.449 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:47:35.453264 coreos-metadata[912]: Jan 30 13:47:35.451 INFO Fetch successful
Jan 30 13:47:35.453264 coreos-metadata[912]: Jan 30 13:47:35.451 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:47:35.464648 coreos-metadata[912]: Jan 30 13:47:35.464 INFO Fetch successful
Jan 30 13:47:35.481338 coreos-metadata[912]: Jan 30 13:47:35.481 INFO wrote hostname ci-4081.3.0-a-95297e853e to /sysroot/etc/hostname
Jan 30 13:47:35.483235 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:47:35.567805 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:47:35.604662 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:47:35.625204 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:47:35.644134 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:47:36.574439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:47:36.582067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:47:36.589119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:47:36.600539 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:47:36.605558 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:36.625118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:47:36.633939 ignition[1028]: INFO : Ignition 2.19.0
Jan 30 13:47:36.633939 ignition[1028]: INFO : Stage: mount
Jan 30 13:47:36.640177 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:36.640177 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:36.640177 ignition[1028]: INFO : mount: mount passed
Jan 30 13:47:36.640177 ignition[1028]: INFO : Ignition finished successfully
Jan 30 13:47:36.635997 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:47:36.654294 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:47:36.663167 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:36.676965 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1040)
Jan 30 13:47:36.676997 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:36.680965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:36.685078 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:36.689963 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:36.691662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:36.716496 ignition[1057]: INFO : Ignition 2.19.0 Jan 30 13:47:36.716496 ignition[1057]: INFO : Stage: files Jan 30 13:47:36.720346 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:36.720346 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:47:36.720346 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:47:36.746173 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:47:36.746173 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:47:36.856561 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:47:36.860302 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:47:36.860302 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:47:36.857096 unknown[1057]: wrote ssh authorized keys file for user: core Jan 30 13:47:36.899739 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:47:36.959306 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jan 30 13:47:37.111361 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:47:37.111361 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:37.121691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:47:37.121691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:37.130139 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:47:37.130139 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:37.138248 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:47:37.138248 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:37.146848 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:47:37.151209 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 
13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:47:37.678486 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jan 30 13:47:37.990357 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:47:37.990357 ignition[1057]: INFO : files: op(c): [started] processing unit "containerd.service" Jan 30 13:47:38.005589 ignition[1057]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(c): [finished] processing unit "containerd.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:47:38.012979 ignition[1057]: INFO : files: files passed Jan 30 13:47:38.012979 ignition[1057]: INFO : Ignition finished successfully Jan 30 13:47:38.007726 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:47:38.032570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:47:38.057371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:47:38.060339 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:47:38.060433 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
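[Editor's note] Each createFiles op in this files stage corresponds to an entry in the instance's Ignition config (the base.d directories were empty, per the log). As a hypothetical illustration only — the path and URL are from the log, the surrounding structure is assumed — a Butane snippet producing op(4), the helm tarball download, would look roughly like:

  # Butane config sketch (hypothetical)
  variant: flatcar
  version: 1.0.0
  storage:
    files:
      - path: /opt/helm-v3.13.2-linux-amd64.tar.gz
        contents:
          source: https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz

Butane transpiles this YAML into the JSON Ignition actually consumes; the unit and preset ops (op(c) through op(10)) map to the config's systemd section in the same way.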
Jan 30 13:47:38.080450 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:38.080450 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:38.087544 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:47:38.091826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:38.094836 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:47:38.106102 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:47:38.135850 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:47:38.135999 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:47:38.140877 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:47:38.148822 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:47:38.153536 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:47:38.165177 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:47:38.178686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:38.187240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:47:38.198489 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:38.203499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:38.206261 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:47:38.212851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:47:38.213036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:47:38.220075 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:47:38.224898 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:47:38.228737 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:47:38.231076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:47:38.238050 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:47:38.240719 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:47:38.245120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:38.247868 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:47:38.252205 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:47:38.261158 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:47:38.264498 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 30 13:47:38.264644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:38.271619 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:38.276258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:38.278838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
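[Editor's note] The enabled-sysext.conf grep errors logged at the start of this stretch appear benign: the root-filesystem-completion step merely probes for an optional list of extra sysext images to enable, which this node does not ship. The kubernetes sysext written earlier is instead activated through the symlink convention; recreating that link by hand would look like this (paths verbatim from the log's op(a); the refresh step applies only on a fully booted system):

  ln -sf /opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw /etc/extensions/kubernetes.raw
  systemd-sysext refresh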
Jan 30 13:47:38.283372 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:38.286039 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:47:38.286161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:38.295894 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:47:38.296092 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:47:38.301434 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:47:38.301582 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:47:38.310422 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 30 13:47:38.310558 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 30 13:47:38.323154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:47:38.330197 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:47:38.335066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:47:38.335258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:38.341490 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:47:38.349534 ignition[1109]: INFO : Ignition 2.19.0 Jan 30 13:47:38.349534 ignition[1109]: INFO : Stage: umount Jan 30 13:47:38.349534 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:47:38.349534 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 30 13:47:38.349534 ignition[1109]: INFO : umount: umount passed Jan 30 13:47:38.349534 ignition[1109]: INFO : Ignition finished successfully Jan 30 13:47:38.341660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:47:38.353635 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:47:38.353740 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:47:38.360075 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:47:38.360369 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:47:38.360935 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:47:38.361276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:47:38.361617 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 30 13:47:38.361711 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 30 13:47:38.362004 systemd[1]: Stopped target network.target - Network. Jan 30 13:47:38.362392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:47:38.362484 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:47:38.362873 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:47:38.365481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:47:38.389005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:38.393021 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:47:38.394986 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:47:38.402118 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:47:38.402174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. 
Jan 30 13:47:38.404959 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:47:38.405006 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:47:38.408631 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:47:38.408692 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:47:38.411484 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:47:38.411531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:47:38.411957 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:47:38.412216 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:47:38.412858 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:47:38.413466 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:47:38.438121 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:47:38.442014 systemd-networkd[870]: eth0: DHCPv6 lease lost Jan 30 13:47:38.445187 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:47:38.445284 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:47:38.450820 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:47:38.450941 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:47:38.469728 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:47:38.469852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:47:38.481727 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:47:38.481804 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:38.487246 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:47:38.487301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:47:38.516087 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:47:38.518067 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:47:38.518138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:47:38.525600 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:47:38.525655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:38.530799 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:47:38.530855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:38.535404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:47:38.537679 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:38.549741 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:38.570574 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:47:38.570743 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:38.576807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:47:38.576854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:38.583116 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:47:38.585789 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. 
Jan 30 13:47:38.591896 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:47:38.591970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:47:38.596167 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:47:38.596213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:47:38.600976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:47:38.601022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:38.616142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:47:38.621794 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: Data path switched from VF: enP29028s1 Jan 30 13:47:38.621915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:47:38.621988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:38.627069 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:47:38.627128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:47:38.638433 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:47:38.638496 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:38.643535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:47:38.643582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:38.649065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:47:38.649434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:47:38.653929 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:47:38.654096 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:47:38.658870 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:47:38.672190 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:47:38.783576 systemd[1]: Switching root. 
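[Editor's note] Switching root hands control from the initrd to the real root filesystem. It is performed by initrd-switch-root.service, roughly equivalent to the following (a sketch; the exact invocation may differ between systemd versions):

  # re-execute systemd with /sysroot as the new root
  systemctl --no-block switch-root /sysroot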
Jan 30 13:47:38.815920 systemd-journald[176]: Journal stopped
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 30 13:47:29.055835 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 30 13:47:29.055847 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 30 13:47:29.055860 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 30 13:47:29.055872 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 30 13:47:29.055884 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 30 13:47:29.055897 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 30 13:47:29.055909 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 30 13:47:29.055922 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 30 13:47:29.055936 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 30 13:47:29.055949 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 30 13:47:29.055961 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 30 13:47:29.055974 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 30 13:47:29.055987 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 30 13:47:29.055999 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 30 13:47:29.056012 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 30 13:47:29.056024 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 30 13:47:29.056037 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 30 13:47:29.056052 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 30 13:47:29.056064 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 30 13:47:29.056077 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 30 13:47:29.056089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 30 13:47:29.056102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 30 13:47:29.056114 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 30 13:47:29.056127 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 30 13:47:29.056139 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 30 13:47:29.056152 kernel: Zone ranges: Jan 30 13:47:29.056167 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 30 13:47:29.056179 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 30 13:47:29.056192 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:47:29.056204 kernel: Movable zone start for each node Jan 30 13:47:29.056217 kernel: Early memory node ranges Jan 30 13:47:29.056229 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 30 13:47:29.056241 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 30 13:47:29.056254 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 30 13:47:29.056267 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 30 13:47:29.056282 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 30 13:47:29.056295 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 30 13:47:29.056307 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 30 13:47:29.056320 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 30 13:47:29.056333 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 30 13:47:29.056346 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 30 13:47:29.056361 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 30 13:47:29.056375 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 30 13:47:29.056394 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 30 13:47:29.056424 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 30 13:47:29.056477 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 30 13:47:29.056491 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 30 13:47:29.056504 kernel: Booting paravirtualized kernel on Hyper-V Jan 30 13:47:29.056518 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 30 13:47:29.056532 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 30 13:47:29.056546 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 30 13:47:29.056559 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 30 13:47:29.056573 kernel: pcpu-alloc: [0] 0 1 Jan 30 13:47:29.056590 kernel: Hyper-V: PV spinlocks enabled Jan 30 13:47:29.056603 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 30 13:47:29.056619 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:29.056633 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 30 13:47:29.056646 kernel: random: crng init done Jan 30 13:47:29.056660 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 30 13:47:29.056673 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 30 13:47:29.056687 kernel: Fallback order for Node 0: 0 Jan 30 13:47:29.056703 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 30 13:47:29.056727 kernel: Policy zone: Normal Jan 30 13:47:29.056741 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 30 13:47:29.056758 kernel: software IO TLB: area num 2. Jan 30 13:47:29.056773 kernel: Memory: 8077008K/8387460K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 310192K reserved, 0K cma-reserved) Jan 30 13:47:29.056787 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 30 13:47:29.056802 kernel: ftrace: allocating 37921 entries in 149 pages Jan 30 13:47:29.056816 kernel: ftrace: allocated 149 pages with 4 groups Jan 30 13:47:29.056830 kernel: Dynamic Preempt: voluntary Jan 30 13:47:29.056845 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 30 13:47:29.056861 kernel: rcu: RCU event tracing is enabled. Jan 30 13:47:29.056878 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 30 13:47:29.056893 kernel: Trampoline variant of Tasks RCU enabled. Jan 30 13:47:29.056907 kernel: Rude variant of Tasks RCU enabled. Jan 30 13:47:29.056922 kernel: Tracing variant of Tasks RCU enabled. 
Jan 30 13:47:29.056937 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 30 13:47:29.056954 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 30 13:47:29.056968 kernel: Using NULL legacy PIC Jan 30 13:47:29.056982 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 30 13:47:29.056997 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 30 13:47:29.057012 kernel: Console: colour dummy device 80x25 Jan 30 13:47:29.057026 kernel: printk: console [tty1] enabled Jan 30 13:47:29.057040 kernel: printk: console [ttyS0] enabled Jan 30 13:47:29.057055 kernel: printk: bootconsole [earlyser0] disabled Jan 30 13:47:29.057069 kernel: ACPI: Core revision 20230628 Jan 30 13:47:29.057083 kernel: Failed to register legacy timer interrupt Jan 30 13:47:29.057100 kernel: APIC: Switch to symmetric I/O mode setup Jan 30 13:47:29.057115 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 30 13:47:29.057129 kernel: Hyper-V: Using IPI hypercalls Jan 30 13:47:29.057144 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 30 13:47:29.057158 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 30 13:47:29.057173 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 30 13:47:29.057187 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 30 13:47:29.057202 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 30 13:47:29.057216 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 30 13:47:29.057234 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 30 13:47:29.057248 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:47:29.057263 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:47:29.057277 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:47:29.057292 kernel: Spectre V2 : Mitigation: Retpolines Jan 30 13:47:29.057306 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:47:29.057320 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 30 13:47:29.057334 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 30 13:47:29.057349 kernel: RETBleed: Vulnerable Jan 30 13:47:29.057365 kernel: Speculative Store Bypass: Vulnerable Jan 30 13:47:29.057380 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:47:29.057394 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:47:29.057408 kernel: GDS: Unknown: Dependent on hypervisor status Jan 30 13:47:29.057422 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:47:29.057443 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:47:29.057458 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:47:29.057473 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 30 13:47:29.057487 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 30 13:47:29.057501 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 30 13:47:29.057516 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:47:29.057533 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 30 13:47:29.057548 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 30 13:47:29.057562 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 30 13:47:29.057577 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 30 13:47:29.057591 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:47:29.057606 kernel: pid_max: default: 32768 minimum: 301 Jan 30 13:47:29.057620 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:47:29.057635 kernel: landlock: Up and running. Jan 30 13:47:29.057649 kernel: SELinux: Initializing. Jan 30 13:47:29.057663 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:47:29.057678 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:47:29.057693 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 30 13:47:29.057710 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:29.057724 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:29.057740 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 30 13:47:29.057754 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 30 13:47:29.057769 kernel: signal: max sigframe size: 3632 Jan 30 13:47:29.057784 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:47:29.057798 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:47:29.057813 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:47:29.057827 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:47:29.057845 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:47:29.057859 kernel: .... node #0, CPUs: #1 Jan 30 13:47:29.057874 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 30 13:47:29.057889 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 30 13:47:29.057904 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:47:29.057919 kernel: smpboot: Max logical packages: 1 Jan 30 13:47:29.057933 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 30 13:47:29.057947 kernel: devtmpfs: initialized Jan 30 13:47:29.057965 kernel: x86/mm: Memory block size: 128MB Jan 30 13:47:29.057979 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 30 13:47:29.057994 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:47:29.058009 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 30 13:47:29.058023 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:47:29.058038 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:47:29.058053 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:47:29.058067 kernel: audit: type=2000 audit(1738244848.028:1): state=initialized audit_enabled=0 res=1 Jan 30 13:47:29.058082 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:47:29.058098 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:47:29.058113 kernel: cpuidle: using governor menu Jan 30 13:47:29.058128 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:47:29.058143 kernel: dca service started, version 1.12.1 Jan 30 13:47:29.058157 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 30 13:47:29.058171 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 30 13:47:29.058186 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:47:29.058201 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:47:29.058219 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:47:29.058236 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:47:29.058251 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:47:29.058265 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:47:29.058278 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:47:29.058290 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:47:29.058302 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:47:29.058316 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:47:29.058333 kernel: ACPI: Interpreter enabled Jan 30 13:47:29.058362 kernel: ACPI: PM: (supports S0 S5) Jan 30 13:47:29.058396 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:47:29.058407 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:47:29.058420 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 30 13:47:29.060905 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 30 13:47:29.060936 kernel: iommu: Default domain type: Translated Jan 30 13:47:29.060950 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:47:29.060964 kernel: efivars: Registered efivars operations Jan 30 13:47:29.060977 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:47:29.060989 kernel: PCI: System does not support PCI Jan 30 13:47:29.061007 kernel: vgaarb: loaded Jan 30 13:47:29.061020 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 30 13:47:29.061032 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:47:29.061046 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:47:29.061059 kernel: 
pnp: PnP ACPI init Jan 30 13:47:29.061073 kernel: pnp: PnP ACPI: found 3 devices Jan 30 13:47:29.061088 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:47:29.061101 kernel: NET: Registered PF_INET protocol family Jan 30 13:47:29.061114 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 30 13:47:29.061133 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 30 13:47:29.061147 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:47:29.061160 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 30 13:47:29.061174 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 30 13:47:29.061187 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 30 13:47:29.061199 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:47:29.061211 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 30 13:47:29.061222 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:47:29.061232 kernel: NET: Registered PF_XDP protocol family Jan 30 13:47:29.061247 kernel: PCI: CLS 0 bytes, default 64 Jan 30 13:47:29.061257 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 30 13:47:29.061267 kernel: software IO TLB: mapped [mem 0x000000003ad8c000-0x000000003ed8c000] (64MB) Jan 30 13:47:29.061276 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:47:29.061285 kernel: Initialise system trusted keyrings Jan 30 13:47:29.061295 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 30 13:47:29.061304 kernel: Key type asymmetric registered Jan 30 13:47:29.061314 kernel: Asymmetric key parser 'x509' registered Jan 30 13:47:29.061322 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:47:29.061335 kernel: io scheduler mq-deadline registered Jan 30 13:47:29.061343 kernel: io scheduler kyber registered Jan 30 13:47:29.061351 kernel: io scheduler bfq registered Jan 30 13:47:29.061359 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:47:29.061367 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:47:29.061375 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:47:29.061383 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 30 13:47:29.061391 kernel: i8042: PNP: No PS/2 controller found. 
Jan 30 13:47:29.061558 kernel: rtc_cmos 00:02: registered as rtc0 Jan 30 13:47:29.061660 kernel: rtc_cmos 00:02: setting system clock to 2025-01-30T13:47:28 UTC (1738244848) Jan 30 13:47:29.061752 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 30 13:47:29.061764 kernel: intel_pstate: CPU model not supported Jan 30 13:47:29.061775 kernel: efifb: probing for efifb Jan 30 13:47:29.061784 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 30 13:47:29.061794 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 30 13:47:29.061803 kernel: efifb: scrolling: redraw Jan 30 13:47:29.061817 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 30 13:47:29.061825 kernel: Console: switching to colour frame buffer device 128x48 Jan 30 13:47:29.061837 kernel: fb0: EFI VGA frame buffer device Jan 30 13:47:29.061845 kernel: pstore: Using crash dump compression: deflate Jan 30 13:47:29.061856 kernel: pstore: Registered efi_pstore as persistent store backend Jan 30 13:47:29.061864 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:47:29.061876 kernel: Segment Routing with IPv6 Jan 30 13:47:29.061884 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:47:29.061896 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:47:29.061904 kernel: Key type dns_resolver registered Jan 30 13:47:29.061918 kernel: IPI shorthand broadcast: enabled Jan 30 13:47:29.061926 kernel: sched_clock: Marking stable (786003100, 40635700)->(1019873100, -193234300) Jan 30 13:47:29.061937 kernel: registered taskstats version 1 Jan 30 13:47:29.061945 kernel: Loading compiled-in X.509 certificates Jan 30 13:47:29.061957 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:47:29.061965 kernel: Key type .fscrypt registered Jan 30 13:47:29.061976 kernel: Key type fscrypt-provisioning registered Jan 30 13:47:29.061984 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:47:29.061997 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:47:29.062005 kernel: ima: No architecture policies found Jan 30 13:47:29.062016 kernel: clk: Disabling unused clocks Jan 30 13:47:29.062027 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:47:29.062035 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:47:29.062047 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:47:29.062055 kernel: Run /init as init process Jan 30 13:47:29.062066 kernel: with arguments: Jan 30 13:47:29.062074 kernel: /init Jan 30 13:47:29.062086 kernel: with environment: Jan 30 13:47:29.062096 kernel: HOME=/ Jan 30 13:47:29.062107 kernel: TERM=linux Jan 30 13:47:29.062115 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:47:29.062124 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:47:29.062138 systemd[1]: Detected virtualization microsoft. Jan 30 13:47:29.062147 systemd[1]: Detected architecture x86-64. Jan 30 13:47:29.062158 systemd[1]: Running in initrd. Jan 30 13:47:29.062170 systemd[1]: No hostname configured, using default hostname. Jan 30 13:47:29.062180 systemd[1]: Hostname set to . Jan 30 13:47:29.062191 systemd[1]: Initializing machine ID from random generator. 
Jan 30 13:47:29.062201 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:47:29.062211 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:47:29.062221 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:47:29.062232 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:47:29.062242 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:47:29.062257 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:47:29.062265 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:47:29.062278 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:47:29.062287 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:47:29.062299 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:47:29.062308 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:29.062321 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:47:29.062336 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:47:29.062345 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:47:29.062355 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:47:29.062366 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:47:29.062377 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:47:29.062386 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 30 13:47:29.062397 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:47:29.062407 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:47:29.062418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:47:29.062431 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:47:29.062453 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:47:29.062462 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:47:29.062473 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:47:29.062482 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:47:29.062494 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:47:29.062502 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:47:29.062514 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:47:29.062526 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:47:29.062537 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:47:29.062564 systemd-journald[176]: Collecting audit messages is disabled. Jan 30 13:47:29.062588 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:47:29.062601 systemd-journald[176]: Journal started Jan 30 13:47:29.062637 systemd-journald[176]: Runtime Journal (/run/log/journal/b146bf7c454041bfa18a57c63db7116f) is 8.0M, max 158.8M, 150.8M free. 
Jan 30 13:47:29.059498 systemd-modules-load[177]: Inserted module 'overlay' Jan 30 13:47:29.074465 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:47:29.078902 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:47:29.091932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:47:29.099661 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:47:29.112878 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:47:29.113077 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:29.123066 kernel: Bridge firewalling registered Jan 30 13:47:29.118590 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:47:29.127603 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 30 13:47:29.128203 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:47:29.142584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:47:29.144170 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:47:29.144476 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:29.147564 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:47:29.170074 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:47:29.174563 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:47:29.182576 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:47:29.186979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:47:29.197570 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:47:29.204179 dracut-cmdline[213]: dracut-dracut-053 Jan 30 13:47:29.207941 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:47:29.255991 systemd-resolved[217]: Positive Trust Anchors: Jan 30 13:47:29.256009 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:47:29.256069 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:47:29.282365 systemd-resolved[217]: Defaulting to hostname 'linux'. 
Jan 30 13:47:29.285818 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:47:29.288586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:29.307457 kernel: SCSI subsystem initialized Jan 30 13:47:29.317455 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:47:29.328458 kernel: iscsi: registered transport (tcp) Jan 30 13:47:29.349459 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:47:29.349523 kernel: QLogic iSCSI HBA Driver Jan 30 13:47:29.385400 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:47:29.394594 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:47:29.420666 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:47:29.420749 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:47:29.423587 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:47:29.464466 kernel: raid6: avx512x4 gen() 18433 MB/s Jan 30 13:47:29.483453 kernel: raid6: avx512x2 gen() 18619 MB/s Jan 30 13:47:29.502446 kernel: raid6: avx512x1 gen() 18508 MB/s Jan 30 13:47:29.521451 kernel: raid6: avx2x4 gen() 18581 MB/s Jan 30 13:47:29.540449 kernel: raid6: avx2x2 gen() 18549 MB/s Jan 30 13:47:29.559878 kernel: raid6: avx2x1 gen() 13746 MB/s Jan 30 13:47:29.559912 kernel: raid6: using algorithm avx512x2 gen() 18619 MB/s Jan 30 13:47:29.581120 kernel: raid6: .... xor() 30360 MB/s, rmw enabled Jan 30 13:47:29.581157 kernel: raid6: using avx512x2 recovery algorithm Jan 30 13:47:29.603460 kernel: xor: automatically using best checksumming function avx Jan 30 13:47:29.755466 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:47:29.765359 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:47:29.774674 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:47:29.787889 systemd-udevd[398]: Using default interface naming scheme 'v255'. Jan 30 13:47:29.792361 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:47:29.804715 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:47:29.820257 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation Jan 30 13:47:29.848046 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:47:29.857591 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:47:29.898157 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:47:29.907604 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:47:29.932651 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:47:29.939304 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:47:29.941425 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:47:29.942204 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:47:29.954225 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:47:29.975295 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:47:29.998456 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:47:30.023039 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Jan 30 13:47:30.041808 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 30 13:47:30.041839 kernel: hv_vmbus: Vmbus version:5.2
Jan 30 13:47:30.041858 kernel: AES CTR mode by8 optimization enabled
Jan 30 13:47:30.023269 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:30.027365 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:30.030049 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:30.030298 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.050599 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.066359 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 30 13:47:30.066404 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 30 13:47:30.069852 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.082708 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:30.084303 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.099470 kernel: PTP clock support registered
Jan 30 13:47:30.100590 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:30.118460 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 30 13:47:30.122456 kernel: hv_vmbus: registering driver hid_hyperv
Jan 30 13:47:30.127632 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input0
Jan 30 13:47:30.127689 kernel: hv_utils: Registering HyperV Utility Driver
Jan 30 13:47:30.134103 kernel: hv_vmbus: registering driver hv_utils
Jan 30 13:47:30.134140 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 30 13:47:30.140603 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 30 13:47:30.140630 kernel: hv_utils: Heartbeat IC version 3.0
Jan 30 13:47:30.140653 kernel: hv_utils: Shutdown IC version 3.2
Jan 30 13:47:30.142854 kernel: hv_utils: TimeSync IC version 4.0
Jan 30 13:47:30.924730 systemd-resolved[217]: Clock change detected. Flushing caches.
Jan 30 13:47:30.932670 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input1
Jan 30 13:47:30.932689 kernel: hv_vmbus: registering driver hv_netvsc
Jan 30 13:47:30.934646 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:30.945181 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 30 13:47:30.954981 kernel: hv_vmbus: registering driver hv_storvsc
Jan 30 13:47:30.960974 kernel: scsi host1: storvsc_host_t
Jan 30 13:47:30.963968 kernel: scsi host0: storvsc_host_t
Jan 30 13:47:30.969100 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 30 13:47:30.974979 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 30 13:47:30.987729 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:31.000592 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 30 13:47:31.003225 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 30 13:47:31.003252 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 30 13:47:31.015342 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 30 13:47:31.029379 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 30 13:47:31.029575 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 30 13:47:31.029746 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 30 13:47:31.029910 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 30 13:47:31.030098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:31.030120 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 30 13:47:31.156996 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: VF slot 1 added
Jan 30 13:47:31.166353 kernel: hv_vmbus: registering driver hv_pci
Jan 30 13:47:31.166384 kernel: hv_pci 64957417-7164-4fe6-802e-85e9fd580a78: PCI VMBus probing: Using version 0x10004
Jan 30 13:47:31.209199 kernel: hv_pci 64957417-7164-4fe6-802e-85e9fd580a78: PCI host bridge to bus 7164:00
Jan 30 13:47:31.210421 kernel: pci_bus 7164:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 30 13:47:31.210606 kernel: pci_bus 7164:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 30 13:47:31.210752 kernel: pci 7164:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 30 13:47:31.210958 kernel: pci 7164:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:47:31.211144 kernel: pci 7164:00:02.0: enabling Extended Tags
Jan 30 13:47:31.211309 kernel: pci 7164:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 7164:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 30 13:47:31.211479 kernel: pci_bus 7164:00: busn_res: [bus 00-ff] end is updated to 00
Jan 30 13:47:31.211623 kernel: pci 7164:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 30 13:47:31.381057 kernel: mlx5_core 7164:00:02.0: enabling device (0000 -> 0002)
Jan 30 13:47:31.619762 kernel: mlx5_core 7164:00:02.0: firmware version: 14.30.5000
Jan 30 13:47:31.620004 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: VF registering: eth1
Jan 30 13:47:31.620178 kernel: mlx5_core 7164:00:02.0 eth1: joined to eth0
Jan 30 13:47:31.620366 kernel: mlx5_core 7164:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 30 13:47:31.620529 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Jan 30 13:47:31.584024 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 30 13:47:31.628967 kernel: mlx5_core 7164:00:02.0 enP29028s1: renamed from eth1
Jan 30 13:47:31.649541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:47:31.664743 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 30 13:47:31.683927 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (442)
Jan 30 13:47:31.697445 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 30 13:47:31.698892 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 30 13:47:31.707510 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 30 13:47:31.726077 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:31.734995 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:32.742985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 30 13:47:32.744199 disk-uuid[600]: The operation has completed successfully.
Jan 30 13:47:32.815910 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 30 13:47:32.816058 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 30 13:47:32.846100 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 30 13:47:32.853886 sh[686]: Success
Jan 30 13:47:32.883162 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 30 13:47:33.087843 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 30 13:47:33.099746 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 30 13:47:33.104508 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 30 13:47:33.123126 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a
Jan 30 13:47:33.123173 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:33.126425 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 30 13:47:33.128938 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 30 13:47:33.131210 kernel: BTRFS info (device dm-0): using free space tree
Jan 30 13:47:33.551613 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 30 13:47:33.556361 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 30 13:47:33.567106 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 30 13:47:33.573512 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 30 13:47:33.589002 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:33.589053 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:33.589083 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:33.609998 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:33.624987 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:33.625016 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 30 13:47:33.634410 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 30 13:47:33.643221 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 30 13:47:33.676168 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:47:33.684215 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:47:33.705163 systemd-networkd[870]: lo: Link UP
Jan 30 13:47:33.705173 systemd-networkd[870]: lo: Gained carrier
Jan 30 13:47:33.707457 systemd-networkd[870]: Enumeration completed
Jan 30 13:47:33.707692 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:47:33.710911 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:33.710917 systemd-networkd[870]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:47:33.714965 systemd[1]: Reached target network.target - Network.
Jan 30 13:47:33.778973 kernel: mlx5_core 7164:00:02.0 enP29028s1: Link up
Jan 30 13:47:33.817985 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: Data path switched to VF: enP29028s1
Jan 30 13:47:33.818779 systemd-networkd[870]: enP29028s1: Link UP
Jan 30 13:47:33.818934 systemd-networkd[870]: eth0: Link UP
Jan 30 13:47:33.819166 systemd-networkd[870]: eth0: Gained carrier
Jan 30 13:47:33.819181 systemd-networkd[870]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:33.824167 systemd-networkd[870]: enP29028s1: Gained carrier
Jan 30 13:47:33.851067 systemd-networkd[870]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 30 13:47:34.355616 ignition[820]: Ignition 2.19.0
Jan 30 13:47:34.355630 ignition[820]: Stage: fetch-offline
Jan 30 13:47:34.355688 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.355701 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.355822 ignition[820]: parsed url from cmdline: ""
Jan 30 13:47:34.355827 ignition[820]: no config URL provided
Jan 30 13:47:34.355833 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.355844 ignition[820]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.355851 ignition[820]: failed to fetch config: resource requires networking
Jan 30 13:47:34.357300 ignition[820]: Ignition finished successfully
Jan 30 13:47:34.373750 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:47:34.383238 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 30 13:47:34.397152 ignition[878]: Ignition 2.19.0
Jan 30 13:47:34.397162 ignition[878]: Stage: fetch
Jan 30 13:47:34.397380 ignition[878]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.397394 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.397485 ignition[878]: parsed url from cmdline: ""
Jan 30 13:47:34.397489 ignition[878]: no config URL provided
Jan 30 13:47:34.397496 ignition[878]: reading system config file "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.397503 ignition[878]: no config at "/usr/lib/ignition/user.ign"
Jan 30 13:47:34.397527 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 30 13:47:34.486716 ignition[878]: GET result: OK
Jan 30 13:47:34.486820 ignition[878]: config has been read from IMDS userdata
Jan 30 13:47:34.486859 ignition[878]: parsing config with SHA512: 8962fcfeb70bd7875d3dfcfbf254a2677f81589fe1a85e667fe3c550a1431f4fe79e2f68fcf6be695053ff54a71a7db563ad0ae83d66a6208079830b3436dfce
Jan 30 13:47:34.495832 unknown[878]: fetched base config from "system"
Jan 30 13:47:34.495867 unknown[878]: fetched base config from "system"
Jan 30 13:47:34.497801 ignition[878]: fetch: fetch complete
Jan 30 13:47:34.495880 unknown[878]: fetched user config from "azure"
Jan 30 13:47:34.497810 ignition[878]: fetch: fetch passed
Jan 30 13:47:34.497875 ignition[878]: Ignition finished successfully
Jan 30 13:47:34.508025 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 30 13:47:34.516116 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 30 13:47:34.530747 ignition[885]: Ignition 2.19.0
Jan 30 13:47:34.530758 ignition[885]: Stage: kargs
Jan 30 13:47:34.530978 ignition[885]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.533751 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 30 13:47:34.530991 ignition[885]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.531903 ignition[885]: kargs: kargs passed
Jan 30 13:47:34.531961 ignition[885]: Ignition finished successfully
Jan 30 13:47:34.545153 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 30 13:47:34.563282 ignition[891]: Ignition 2.19.0
Jan 30 13:47:34.563293 ignition[891]: Stage: disks
Jan 30 13:47:34.565188 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 30 13:47:34.563517 ignition[891]: no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:34.569211 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 30 13:47:34.563529 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:34.573376 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 30 13:47:34.564386 ignition[891]: disks: disks passed
Jan 30 13:47:34.576205 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 30 13:47:34.564428 ignition[891]: Ignition finished successfully
Jan 30 13:47:34.582378 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 30 13:47:34.597128 systemd[1]: Reached target basic.target - Basic System.
Jan 30 13:47:34.610105 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 30 13:47:34.664303 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 30 13:47:34.669383 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 30 13:47:34.683085 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 30 13:47:34.775965 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none.
Jan 30 13:47:34.776215 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 30 13:47:34.778246 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 30 13:47:34.819053 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:34.823625 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 30 13:47:34.839100 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (910)
Jan 30 13:47:34.839162 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:34.840139 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 30 13:47:34.853266 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:34.853294 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:34.853310 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:34.853028 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 30 13:47:34.853069 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:47:34.862237 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:34.867143 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 30 13:47:34.879135 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 30 13:47:34.911163 systemd-networkd[870]: eth0: Gained IPv6LL
Jan 30 13:47:35.295194 systemd-networkd[870]: enP29028s1: Gained IPv6LL
Jan 30 13:47:35.449252 coreos-metadata[912]: Jan 30 13:47:35.449 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 30 13:47:35.453264 coreos-metadata[912]: Jan 30 13:47:35.451 INFO Fetch successful
Jan 30 13:47:35.453264 coreos-metadata[912]: Jan 30 13:47:35.451 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 30 13:47:35.464648 coreos-metadata[912]: Jan 30 13:47:35.464 INFO Fetch successful
Jan 30 13:47:35.481338 coreos-metadata[912]: Jan 30 13:47:35.481 INFO wrote hostname ci-4081.3.0-a-95297e853e to /sysroot/etc/hostname
Jan 30 13:47:35.483235 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:47:35.567805 initrd-setup-root[939]: cut: /sysroot/etc/passwd: No such file or directory
Jan 30 13:47:35.604662 initrd-setup-root[946]: cut: /sysroot/etc/group: No such file or directory
Jan 30 13:47:35.625204 initrd-setup-root[953]: cut: /sysroot/etc/shadow: No such file or directory
Jan 30 13:47:35.644134 initrd-setup-root[960]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 30 13:47:36.574439 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 30 13:47:36.582067 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 30 13:47:36.589119 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 30 13:47:36.600539 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 30 13:47:36.605558 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:36.625118 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 30 13:47:36.633939 ignition[1028]: INFO : Ignition 2.19.0
Jan 30 13:47:36.633939 ignition[1028]: INFO : Stage: mount
Jan 30 13:47:36.640177 ignition[1028]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:36.640177 ignition[1028]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:36.640177 ignition[1028]: INFO : mount: mount passed
Jan 30 13:47:36.640177 ignition[1028]: INFO : Ignition finished successfully
Jan 30 13:47:36.635997 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 30 13:47:36.654294 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 30 13:47:36.663167 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 30 13:47:36.676965 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1040)
Jan 30 13:47:36.676997 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03
Jan 30 13:47:36.680965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 30 13:47:36.685078 kernel: BTRFS info (device sda6): using free space tree
Jan 30 13:47:36.689963 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 30 13:47:36.691662 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 30 13:47:36.716496 ignition[1057]: INFO : Ignition 2.19.0
Jan 30 13:47:36.716496 ignition[1057]: INFO : Stage: files
Jan 30 13:47:36.720346 ignition[1057]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:36.720346 ignition[1057]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:36.720346 ignition[1057]: DEBUG : files: compiled without relabeling support, skipping
Jan 30 13:47:36.746173 ignition[1057]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 30 13:47:36.746173 ignition[1057]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 30 13:47:36.856561 ignition[1057]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 30 13:47:36.860302 ignition[1057]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 30 13:47:36.860302 ignition[1057]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 30 13:47:36.857096 unknown[1057]: wrote ssh authorized keys file for user: core
Jan 30 13:47:36.899739 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:47:36.904999 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Jan 30 13:47:36.959306 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 30 13:47:37.111361 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Jan 30 13:47:37.111361 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 30 13:47:37.121691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 30 13:47:37.121691 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:47:37.130139 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 30 13:47:37.130139 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:47:37.138248 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 30 13:47:37.138248 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:47:37.146848 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 30 13:47:37.151209 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:37.155523 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Jan 30 13:47:37.678486 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 30 13:47:37.990357 ignition[1057]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Jan 30 13:47:37.990357 ignition[1057]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 30 13:47:38.005589 ignition[1057]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 30 13:47:38.012979 ignition[1057]: INFO : files: files passed
Jan 30 13:47:38.012979 ignition[1057]: INFO : Ignition finished successfully
Jan 30 13:47:38.007726 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 30 13:47:38.032570 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 30 13:47:38.057371 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 30 13:47:38.060339 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 30 13:47:38.060433 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 30 13:47:38.080450 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:38.080450 initrd-setup-root-after-ignition[1085]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:38.087544 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 30 13:47:38.091826 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:47:38.094836 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 30 13:47:38.106102 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 30 13:47:38.135850 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 30 13:47:38.135999 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 30 13:47:38.140877 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 30 13:47:38.148822 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 30 13:47:38.153536 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 30 13:47:38.165177 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 30 13:47:38.178686 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:47:38.187240 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 30 13:47:38.198489 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 30 13:47:38.203499 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:38.206261 systemd[1]: Stopped target timers.target - Timer Units.
Jan 30 13:47:38.212851 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 30 13:47:38.213036 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 30 13:47:38.220075 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 30 13:47:38.224898 systemd[1]: Stopped target basic.target - Basic System.
Jan 30 13:47:38.228737 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 30 13:47:38.231076 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 30 13:47:38.238050 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 30 13:47:38.240719 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 30 13:47:38.245120 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 30 13:47:38.247868 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 30 13:47:38.252205 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 30 13:47:38.261158 systemd[1]: Stopped target swap.target - Swaps.
Jan 30 13:47:38.264498 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 30 13:47:38.264644 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 30 13:47:38.271619 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 30 13:47:38.276258 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:38.278838 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 30 13:47:38.283372 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:38.286039 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 30 13:47:38.286161 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 30 13:47:38.295894 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 30 13:47:38.296092 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 30 13:47:38.301434 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 30 13:47:38.301582 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 30 13:47:38.310422 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Jan 30 13:47:38.310558 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 30 13:47:38.323154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 30 13:47:38.330197 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 30 13:47:38.335066 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 30 13:47:38.335258 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:38.341490 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 30 13:47:38.349534 ignition[1109]: INFO : Ignition 2.19.0
Jan 30 13:47:38.349534 ignition[1109]: INFO : Stage: umount
Jan 30 13:47:38.349534 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 30 13:47:38.349534 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 30 13:47:38.349534 ignition[1109]: INFO : umount: umount passed
Jan 30 13:47:38.349534 ignition[1109]: INFO : Ignition finished successfully
Jan 30 13:47:38.341660 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 30 13:47:38.353635 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 30 13:47:38.353740 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 30 13:47:38.360075 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 30 13:47:38.360369 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 30 13:47:38.360935 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 30 13:47:38.361276 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 30 13:47:38.361617 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 30 13:47:38.361711 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 30 13:47:38.362004 systemd[1]: Stopped target network.target - Network.
Jan 30 13:47:38.362392 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 30 13:47:38.362484 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 30 13:47:38.362873 systemd[1]: Stopped target paths.target - Path Units.
Jan 30 13:47:38.365481 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 30 13:47:38.389005 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:38.393021 systemd[1]: Stopped target slices.target - Slice Units.
Jan 30 13:47:38.394986 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 30 13:47:38.402118 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 30 13:47:38.402174 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 30 13:47:38.404959 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 30 13:47:38.405006 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 30 13:47:38.408631 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 30 13:47:38.408692 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 30 13:47:38.411484 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 30 13:47:38.411531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 30 13:47:38.411957 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 30 13:47:38.412216 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 30 13:47:38.412858 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 30 13:47:38.413466 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 30 13:47:38.438121 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 30 13:47:38.442014 systemd-networkd[870]: eth0: DHCPv6 lease lost
Jan 30 13:47:38.445187 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 30 13:47:38.445284 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 30 13:47:38.450820 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 30 13:47:38.450941 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 30 13:47:38.469728 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 30 13:47:38.469852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 30 13:47:38.481727 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 30 13:47:38.481804 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:38.487246 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 30 13:47:38.487301 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 30 13:47:38.516087 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 30 13:47:38.518067 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 30 13:47:38.518138 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 30 13:47:38.525600 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 30 13:47:38.525655 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:38.530799 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 30 13:47:38.530855 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:38.535404 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 30 13:47:38.537679 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 30 13:47:38.549741 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:38.570574 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 30 13:47:38.570743 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:38.576807 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 30 13:47:38.576854 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:38.583116 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 30 13:47:38.585789 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:38.591896 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 30 13:47:38.591970 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 30 13:47:38.596167 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 30 13:47:38.596213 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 30 13:47:38.600976 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 30 13:47:38.601022 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 30 13:47:38.616142 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 30 13:47:38.621794 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: Data path switched from VF: enP29028s1
Jan 30 13:47:38.621915 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 30 13:47:38.621988 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:38.627069 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jan 30 13:47:38.627128 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:47:38.638433 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 30 13:47:38.638496 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:38.643535 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:38.643582 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:38.649065 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 30 13:47:38.649434 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 30 13:47:38.653929 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 30 13:47:38.654096 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 30 13:47:38.658870 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 30 13:47:38.672190 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 30 13:47:38.783576 systemd[1]: Switching root.
Jan 30 13:47:38.815920 systemd-journald[176]: Journal stopped
Jan 30 13:47:45.233620 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Jan 30 13:47:45.233740 kernel: SELinux: policy capability network_peer_controls=1
Jan 30 13:47:45.233760 kernel: SELinux: policy capability open_perms=1
Jan 30 13:47:45.233768 kernel: SELinux: policy capability extended_socket_class=1
Jan 30 13:47:45.233776 kernel: SELinux: policy capability always_check_network=0
Jan 30 13:47:45.236456 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 30 13:47:45.236489 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 30 13:47:45.236507 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 30 13:47:45.236518 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 30 13:47:45.236530 kernel: audit: type=1403 audit(1738244861.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 30 13:47:45.236541 systemd[1]: Successfully loaded SELinux policy in 117.260ms.
Jan 30 13:47:45.236553 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.909ms.
Jan 30 13:47:45.236566 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 30 13:47:45.236577 systemd[1]: Detected virtualization microsoft.
Jan 30 13:47:45.236592 systemd[1]: Detected architecture x86-64.
Jan 30 13:47:45.236603 systemd[1]: Detected first boot.
Jan 30 13:47:45.236615 systemd[1]: Hostname set to <ci-4081.3.0-a-95297e853e>.
Jan 30 13:47:45.236626 systemd[1]: Initializing machine ID from random generator.
Jan 30 13:47:45.236637 zram_generator::config[1169]: No configuration found.
Jan 30 13:47:45.236652 systemd[1]: Populated /etc with preset unit settings.
Jan 30 13:47:45.236662 systemd[1]: Queued start job for default target multi-user.target.
Jan 30 13:47:45.236674 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Jan 30 13:47:45.236686 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 30 13:47:45.236697 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 30 13:47:45.236709 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 30 13:47:45.236719 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 30 13:47:45.236734 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 30 13:47:45.236746 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 30 13:47:45.236757 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 30 13:47:45.236769 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 30 13:47:45.236779 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 30 13:47:45.236792 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 30 13:47:45.236802 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 30 13:47:45.236816 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 30 13:47:45.236829 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 30 13:47:45.236842 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 30 13:47:45.236852 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 30 13:47:45.236864 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 30 13:47:45.236877 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 30 13:47:45.236887 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 30 13:47:45.236902 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 30 13:47:45.236915 systemd[1]: Reached target slices.target - Slice Units.
Jan 30 13:47:45.236928 systemd[1]: Reached target swap.target - Swaps.
Jan 30 13:47:45.236940 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 30 13:47:45.236970 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 30 13:47:45.236983 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 30 13:47:45.236994 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 30 13:47:45.237006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 30 13:47:45.237019 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 30 13:47:45.237032 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 30 13:47:45.237044 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 30 13:47:45.237056 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 30 13:47:45.237068 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 30 13:47:45.237082 systemd[1]: Mounting media.mount - External Media Directory...
Jan 30 13:47:45.237096 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:45.237108 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 30 13:47:45.237121 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 30 13:47:45.237131 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 30 13:47:45.237145 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 30 13:47:45.237158 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 30 13:47:45.237169 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 30 13:47:45.237183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 30 13:47:45.237198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 30 13:47:45.237210 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 30 13:47:45.237222 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 30 13:47:45.237234 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 30 13:47:45.237246 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 30 13:47:45.237259 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 30 13:47:45.237269 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 30 13:47:45.237283 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 30 13:47:45.237298 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 30 13:47:45.237308 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 30 13:47:45.237321 kernel: fuse: init (API version 7.39)
Jan 30 13:47:45.237331 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 30 13:47:45.237344 kernel: loop: module loaded
Jan 30 13:47:45.237353 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 30 13:47:45.237387 systemd-journald[1289]: Collecting audit messages is disabled.
Jan 30 13:47:45.237415 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 30 13:47:45.237427 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 30 13:47:45.237442 systemd-journald[1289]: Journal started
Jan 30 13:47:45.237466 systemd-journald[1289]: Runtime Journal (/run/log/journal/5ae3f10f324d4b0abf94c246a1a8523d) is 8.0M, max 158.8M, 150.8M free.
Jan 30 13:47:45.249649 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 30 13:47:45.252315 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 30 13:47:45.257473 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 30 13:47:45.260257 systemd[1]: Mounted media.mount - External Media Directory.
Jan 30 13:47:45.262694 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 30 13:47:45.265389 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 30 13:47:45.268349 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 30 13:47:45.270806 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 30 13:47:45.274598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 30 13:47:45.277778 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 30 13:47:45.278010 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 30 13:47:45.281803 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 30 13:47:45.282397 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 30 13:47:45.285924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 30 13:47:45.286222 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 30 13:47:45.289619 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 30 13:47:45.289917 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 30 13:47:45.292814 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 30 13:47:45.293095 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 30 13:47:45.296053 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 30 13:47:45.299167 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 30 13:47:45.302442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 30 13:47:45.310971 kernel: ACPI: bus type drm_connector registered
Jan 30 13:47:45.311481 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 30 13:47:45.311737 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 30 13:47:45.322309 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 30 13:47:45.329034 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 30 13:47:45.334050 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 30 13:47:45.337094 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 30 13:47:45.355200 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 30 13:47:45.363359 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 30 13:47:45.367199 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 30 13:47:45.370123 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 30 13:47:45.372914 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 30 13:47:45.383506 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 30 13:47:45.393117 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 30 13:47:45.401432 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 30 13:47:45.407151 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 30 13:47:45.433973 systemd-journald[1289]: Time spent on flushing to /var/log/journal/5ae3f10f324d4b0abf94c246a1a8523d is 27.126ms for 949 entries.
Jan 30 13:47:45.433973 systemd-journald[1289]: System Journal (/var/log/journal/5ae3f10f324d4b0abf94c246a1a8523d) is 8.0M, max 2.6G, 2.6G free.
Jan 30 13:47:45.479859 systemd-journald[1289]: Received client request to flush runtime journal.
Jan 30 13:47:45.423430 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 30 13:47:45.428832 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 30 13:47:45.459795 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 30 13:47:45.472155 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 30 13:47:45.481663 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 30 13:47:45.499045 udevadm[1337]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jan 30 13:47:45.521685 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 30 13:47:45.577718 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 30 13:47:45.577742 systemd-tmpfiles[1328]: ACLs are not supported, ignoring.
Jan 30 13:47:45.585005 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 30 13:47:45.595100 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 30 13:47:45.833194 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 30 13:47:45.844255 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 30 13:47:45.860533 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Jan 30 13:47:45.860558 systemd-tmpfiles[1350]: ACLs are not supported, ignoring.
Jan 30 13:47:45.864824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 30 13:47:46.579918 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 30 13:47:46.588135 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 30 13:47:46.613098 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Jan 30 13:47:47.158019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 30 13:47:47.170114 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 30 13:47:47.249669 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 30 13:47:47.254587 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 30 13:47:47.328913 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 30 13:47:47.349968 kernel: mousedev: PS/2 mouse device common for all mice
Jan 30 13:47:47.384789 kernel: hv_vmbus: registering driver hv_balloon
Jan 30 13:47:47.384864 kernel: hv_vmbus: registering driver hyperv_fb
Jan 30 13:47:47.387967 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 30 13:47:47.399970 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 30 13:47:47.406003 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 30 13:47:47.411820 kernel: Console: switching to colour dummy device 80x25
Jan 30 13:47:47.417957 kernel: Console: switching to colour frame buffer device 128x48
Jan 30 13:47:47.499197 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:47.560032 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:47.564109 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:47.580268 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:47.611934 systemd-networkd[1361]: lo: Link UP
Jan 30 13:47:47.612978 systemd-networkd[1361]: lo: Gained carrier
Jan 30 13:47:47.618160 systemd-networkd[1361]: Enumeration completed
Jan 30 13:47:47.618583 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:47.618588 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 30 13:47:47.619072 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 30 13:47:47.629284 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 30 13:47:47.646124 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 30 13:47:47.646464 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 30 13:47:47.703960 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1372)
Jan 30 13:47:47.723754 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 30 13:47:47.749968 kernel: mlx5_core 7164:00:02.0 enP29028s1: Link up
Jan 30 13:47:47.776386 kernel: hv_netvsc 000d3ab4-9355-000d-3ab4-9355000d3ab4 eth0: Data path switched to VF: enP29028s1
Jan 30 13:47:47.781776 systemd-networkd[1361]: enP29028s1: Link UP
Jan 30 13:47:47.782791 systemd-networkd[1361]: eth0: Link UP
Jan 30 13:47:47.782799 systemd-networkd[1361]: eth0: Gained carrier
Jan 30 13:47:47.782823 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 30 13:47:47.798518 systemd-networkd[1361]: enP29028s1: Gained carrier
Jan 30 13:47:47.827974 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 30 13:47:47.835097 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 30 13:47:47.886935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 30 13:47:47.949595 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 30 13:47:47.958123 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 30 13:47:48.042619 lvm[1450]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 30 13:47:48.072086 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:47:48.075881 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:47:48.082192 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:47:48.086621 lvm[1453]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:47:48.114092 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:47:48.117740 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:47:48.118542 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:47:48.118568 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:47:48.119066 systemd[1]: Reached target machines.target - Containers. Jan 30 13:47:48.120555 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:47:48.131204 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:47:48.134270 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:47:48.135687 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:47:48.139102 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:47:48.141942 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:47:48.151126 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:47:48.165517 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:47:48.206989 kernel: loop0: detected capacity change from 0 to 142488 Jan 30 13:47:48.223673 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:47:48.260981 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:47:48.262043 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:47:48.623488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:47:48.700997 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:47:48.745977 kernel: loop1: detected capacity change from 0 to 140768 Jan 30 13:47:49.164970 kernel: loop2: detected capacity change from 0 to 210664 Jan 30 13:47:49.219975 kernel: loop3: detected capacity change from 0 to 31056 Jan 30 13:47:49.439203 systemd-networkd[1361]: enP29028s1: Gained IPv6LL Jan 30 13:47:49.631246 systemd-networkd[1361]: eth0: Gained IPv6LL Jan 30 13:47:49.638094 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:47:49.755972 kernel: loop4: detected capacity change from 0 to 142488 Jan 30 13:47:49.768975 kernel: loop5: detected capacity change from 0 to 140768 Jan 30 13:47:49.780966 kernel: loop6: detected capacity change from 0 to 210664 Jan 30 13:47:49.788966 kernel: loop7: detected capacity change from 0 to 31056 Jan 30 13:47:49.792880 (sd-merge)[1482]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. 
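
The (sd-merge) lines above are systemd-sysext overlaying the extension images 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-azure' onto /usr; the paired loop0-loop3 and loop4-loop7 capacity changes match those four images being attached first for scanning and then for the merge. A minimal sketch, assuming the standard sysext search directories, of listing the images sd-merge would consider:

    import os

    # Standard systemd-sysext search paths; adjust for your image if needed.
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        if os.path.isdir(d):
            for name in sorted(os.listdir(d)):
                # Extension images are typically <name>.raw or directories.
                print(os.path.join(d, name))
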
Jan 30 13:47:49.793483 (sd-merge)[1482]: Merged extensions into '/usr'. Jan 30 13:47:49.798012 systemd[1]: Reloading requested from client PID 1460 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:47:49.798030 systemd[1]: Reloading... Jan 30 13:47:49.852014 zram_generator::config[1506]: No configuration found. Jan 30 13:47:50.033460 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:50.118406 systemd[1]: Reloading finished in 319 ms. Jan 30 13:47:50.134697 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:47:50.144134 systemd[1]: Starting ensure-sysext.service... Jan 30 13:47:50.148134 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:47:50.155014 systemd[1]: Reloading requested from client PID 1573 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:47:50.155035 systemd[1]: Reloading... Jan 30 13:47:50.187584 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:47:50.190147 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:47:50.191889 systemd-tmpfiles[1574]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:47:50.193538 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Jan 30 13:47:50.193638 systemd-tmpfiles[1574]: ACLs are not supported, ignoring. Jan 30 13:47:50.213582 systemd-tmpfiles[1574]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:47:50.213792 systemd-tmpfiles[1574]: Skipping /boot Jan 30 13:47:50.235732 systemd-tmpfiles[1574]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:47:50.237988 systemd-tmpfiles[1574]: Skipping /boot Jan 30 13:47:50.247080 zram_generator::config[1605]: No configuration found. Jan 30 13:47:50.389238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:50.466849 systemd[1]: Reloading finished in 311 ms. Jan 30 13:47:50.483532 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:47:50.500115 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:47:50.506113 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:47:50.514446 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:47:50.521117 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:47:50.530128 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:47:50.539886 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:50.541627 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:47:50.553321 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:47:50.566269 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 30 13:47:50.579337 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:47:50.586430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:47:50.586611 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:50.588402 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:47:50.588630 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:47:50.591987 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:47:50.592215 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:47:50.595831 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:47:50.596327 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:47:50.611560 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 30 13:47:50.628941 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:50.629295 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 30 13:47:50.635333 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:47:50.641459 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:47:50.652438 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:47:50.667735 systemd-resolved[1674]: Positive Trust Anchors: Jan 30 13:47:50.667754 systemd-resolved[1674]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:47:50.667800 systemd-resolved[1674]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:47:50.670853 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:47:50.674601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:47:50.674974 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:47:50.681164 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:47:50.682996 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:47:50.683248 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:47:50.686528 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:47:50.686705 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:47:50.689744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:47:50.689965 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
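
The Positive Trust Anchors entry above is the DNS root zone's DS record, which systemd-resolved ships as its built-in DNSSEC trust anchor. A small sketch splitting it into its standard fields (key tag, algorithm, digest type, digest):

    # Trust anchor exactly as logged by systemd-resolved above.
    ds = (". IN DS 20326 8 2 "
          "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

    owner, _cls, _rtype, key_tag, alg, digest_type, digest = ds.split()
    print("key tag:", key_tag)          # 20326 -- identifies the root KSK
    print("algorithm:", alg)            # 8 = RSA/SHA-256
    print("digest type:", digest_type)  # 2 = SHA-256 of the DNSKEY RDATA
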
Jan 30 13:47:50.693221 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:47:50.693463 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:47:50.698154 systemd[1]: Finished ensure-sysext.service. Jan 30 13:47:50.706088 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:47:50.706162 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:47:50.739863 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:47:50.740984 systemd-resolved[1674]: Using system hostname 'ci-4081.3.0-a-95297e853e'. Jan 30 13:47:50.743228 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:47:50.746760 systemd[1]: Reached target network.target - Network. Jan 30 13:47:50.749605 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:47:50.752409 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:47:50.775848 augenrules[1719]: No rules Jan 30 13:47:50.776774 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:47:51.795677 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:47:51.799600 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:47:54.314256 ldconfig[1457]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:47:54.327170 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:47:54.343161 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:47:54.405684 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:47:54.409363 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:47:54.411834 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:47:54.414522 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:47:54.417431 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:47:54.420036 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:47:54.422731 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:47:54.425432 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:47:54.425475 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:47:54.427567 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:47:54.430509 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:47:54.434261 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:47:54.455760 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:47:54.458634 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:47:54.461349 systemd[1]: Reached target sockets.target - Socket Units. 
Jan 30 13:47:54.463534 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:47:54.465690 systemd[1]: System is tainted: cgroupsv1 Jan 30 13:47:54.465752 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:47:54.465784 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:47:54.490174 systemd[1]: Starting chronyd.service - NTP client/server... Jan 30 13:47:54.495061 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:47:54.501255 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 30 13:47:54.509375 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:47:54.515049 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:47:54.526125 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:47:54.527903 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:47:54.529018 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Jan 30 13:47:54.537164 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Jan 30 13:47:54.542662 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Jan 30 13:47:54.545052 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:47:54.552125 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:47:54.567110 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:47:54.577739 KVP[1744]: KVP starting; pid is:1744 Jan 30 13:47:54.580658 (chronyd)[1735]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Jan 30 13:47:54.589153 jq[1740]: false Jan 30 13:47:54.589568 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:47:54.605725 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:47:54.607311 KVP[1744]: KVP LIC Version: 3.1 Jan 30 13:47:54.607962 kernel: hv_utils: KVP IC version 4.0 Jan 30 13:47:54.613025 chronyd[1757]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Jan 30 13:47:54.621116 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Jan 30 13:47:54.624613 extend-filesystems[1743]: Found loop4 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found loop5 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found loop6 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found loop7 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda1 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda2 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda3 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found usr Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda4 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda6 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda7 Jan 30 13:47:54.624613 extend-filesystems[1743]: Found sda9 Jan 30 13:47:54.624613 extend-filesystems[1743]: Checking size of /dev/sda9 Jan 30 13:47:54.645241 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:47:54.674931 chronyd[1757]: Timezone right/UTC failed leap second check, ignoring Jan 30 13:47:54.650720 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:47:54.675196 chronyd[1757]: Loaded seccomp filter (level 2) Jan 30 13:47:54.655251 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:47:54.670048 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:47:54.683689 systemd[1]: Started chronyd.service - NTP client/server. Jan 30 13:47:54.728806 jq[1769]: true Jan 30 13:47:54.697526 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:47:54.697853 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:47:54.711022 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:47:54.711321 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:47:54.727482 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 30 13:47:54.727797 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:47:54.751192 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:47:54.764022 extend-filesystems[1743]: Old size kept for /dev/sda9 Jan 30 13:47:54.764022 extend-filesystems[1743]: Found sr0 Jan 30 13:47:54.763022 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:47:54.771755 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:47:54.783340 (ntainerd)[1785]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:47:54.800637 update_engine[1768]: I20250130 13:47:54.795074 1768 main.cc:92] Flatcar Update Engine starting Jan 30 13:47:54.801790 systemd-logind[1763]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard) Jan 30 13:47:54.803581 systemd-logind[1763]: New seat seat0. Jan 30 13:47:54.804439 jq[1781]: true Jan 30 13:47:54.814307 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:47:54.830507 dbus-daemon[1738]: [system] SELinux support is enabled Jan 30 13:47:54.830739 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 30 13:47:54.837669 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:47:54.837700 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:47:54.843312 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:47:54.843354 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:47:54.865550 update_engine[1768]: I20250130 13:47:54.865486 1768 update_check_scheduler.cc:74] Next update check in 3m49s Jan 30 13:47:54.868941 dbus-daemon[1738]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 30 13:47:54.869072 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:47:54.872914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:47:54.881151 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:47:54.901041 tar[1779]: linux-amd64/helm Jan 30 13:47:54.960849 coreos-metadata[1737]: Jan 30 13:47:54.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 30 13:47:54.963670 coreos-metadata[1737]: Jan 30 13:47:54.963 INFO Fetch successful Jan 30 13:47:54.964095 coreos-metadata[1737]: Jan 30 13:47:54.964 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jan 30 13:47:54.968703 coreos-metadata[1737]: Jan 30 13:47:54.968 INFO Fetch successful Jan 30 13:47:54.970858 coreos-metadata[1737]: Jan 30 13:47:54.970 INFO Fetching http://168.63.129.16/machine/783a599b-24ba-4a0e-a6bf-ebdf11f12431/d1dc9471%2D19a9%2D41bc%2D8dcf%2Dc05fcc46d1ca.%5Fci%2D4081.3.0%2Da%2D95297e853e?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jan 30 13:47:54.972992 coreos-metadata[1737]: Jan 30 13:47:54.972 INFO Fetch successful Jan 30 13:47:54.974399 coreos-metadata[1737]: Jan 30 13:47:54.974 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jan 30 13:47:54.990969 coreos-metadata[1737]: Jan 30 13:47:54.989 INFO Fetch successful Jan 30 13:47:55.041805 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 30 13:47:55.053490 bash[1830]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:47:55.064749 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1813) Jan 30 13:47:55.058186 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:47:55.075581 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:47:55.083220 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 30 13:47:55.218631 locksmithd[1820]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:47:55.304511 sshd_keygen[1788]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:47:55.343849 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:47:55.357405 systemd[1]: Starting issuegen.service - Generate /run/issue... 
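
The coreos-metadata fetches above touch two Azure endpoints: the WireServer at 168.63.129.16 (goal state) and the instance metadata service (IMDS) at 169.254.169.254 (vmSize). A minimal sketch reproducing the IMDS query, with the URL copied from the log line above; IMDS only answers requests carrying the Metadata: true header and is reachable only from inside the VM:

    import urllib.request

    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})

    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())  # the VM size, e.g. "Standard_DS2_v2"
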
Jan 30 13:47:55.364052 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jan 30 13:47:55.383496 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:47:55.386263 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:47:55.413296 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:47:55.420190 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jan 30 13:47:55.448599 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:47:55.464313 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:47:55.475517 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:47:55.479683 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:47:55.814941 tar[1779]: linux-amd64/LICENSE Jan 30 13:47:55.814941 tar[1779]: linux-amd64/README.md Jan 30 13:47:55.834549 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:47:55.854261 containerd[1785]: time="2025-01-30T13:47:55.854152600Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:47:55.886911 containerd[1785]: time="2025-01-30T13:47:55.886860500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.889038 containerd[1785]: time="2025-01-30T13:47:55.888989300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:55.889186 containerd[1785]: time="2025-01-30T13:47:55.889169200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:47:55.889264 containerd[1785]: time="2025-01-30T13:47:55.889250800Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:47:55.889483 containerd[1785]: time="2025-01-30T13:47:55.889465200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:47:55.889574 containerd[1785]: time="2025-01-30T13:47:55.889559500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.889716 containerd[1785]: time="2025-01-30T13:47:55.889694800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:55.890090 containerd[1785]: time="2025-01-30T13:47:55.890057300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.890570 containerd[1785]: time="2025-01-30T13:47:55.890542000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:55.890676 containerd[1785]: time="2025-01-30T13:47:55.890654400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:47:55.890764 containerd[1785]: time="2025-01-30T13:47:55.890741100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:55.890823 containerd[1785]: time="2025-01-30T13:47:55.890811800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.891024 containerd[1785]: time="2025-01-30T13:47:55.891004400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.893163 containerd[1785]: time="2025-01-30T13:47:55.892602800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:47:55.893163 containerd[1785]: time="2025-01-30T13:47:55.892815500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:47:55.893163 containerd[1785]: time="2025-01-30T13:47:55.892837100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:47:55.893163 containerd[1785]: time="2025-01-30T13:47:55.892926300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:47:55.893163 containerd[1785]: time="2025-01-30T13:47:55.893022500Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:47:55.904187 containerd[1785]: time="2025-01-30T13:47:55.904155800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:47:55.904332 containerd[1785]: time="2025-01-30T13:47:55.904315200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:47:55.904484 containerd[1785]: time="2025-01-30T13:47:55.904465800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:47:55.904567 containerd[1785]: time="2025-01-30T13:47:55.904552500Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:47:55.904615 containerd[1785]: time="2025-01-30T13:47:55.904594700Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:47:55.904763 containerd[1785]: time="2025-01-30T13:47:55.904737400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905171100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905303300Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905325400Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905344400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905368200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905387600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905406700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905426600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905455400Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905473800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905490800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905507100Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905541900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906038 containerd[1785]: time="2025-01-30T13:47:55.905561300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905577700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905595500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905612400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905631400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905650300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905675500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905694700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905713900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905729900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905746500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905763400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905791300Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905820600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905839100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.906519 containerd[1785]: time="2025-01-30T13:47:55.905853600Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.905906900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.905929800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.905959500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.905978900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.905993300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.906016400Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.906031700Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:47:55.907063 containerd[1785]: time="2025-01-30T13:47:55.906046500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:47:55.907325 containerd[1785]: time="2025-01-30T13:47:55.906442000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:47:55.907325 containerd[1785]: time="2025-01-30T13:47:55.906522400Z" level=info msg="Connect containerd service" Jan 30 13:47:55.907325 containerd[1785]: time="2025-01-30T13:47:55.906582200Z" level=info msg="using legacy CRI server" Jan 30 13:47:55.907325 containerd[1785]: time="2025-01-30T13:47:55.906592200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:47:55.907325 containerd[1785]: time="2025-01-30T13:47:55.906722600Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:47:55.907675 containerd[1785]: time="2025-01-30T13:47:55.907464800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 
13:47:55.909188 containerd[1785]: time="2025-01-30T13:47:55.909165100Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:47:55.909550 containerd[1785]: time="2025-01-30T13:47:55.909523800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:47:55.909608 containerd[1785]: time="2025-01-30T13:47:55.909451900Z" level=info msg="Start subscribing containerd event" Jan 30 13:47:55.909608 containerd[1785]: time="2025-01-30T13:47:55.909595700Z" level=info msg="Start recovering state" Jan 30 13:47:55.909692 containerd[1785]: time="2025-01-30T13:47:55.909677500Z" level=info msg="Start event monitor" Jan 30 13:47:55.909729 containerd[1785]: time="2025-01-30T13:47:55.909701000Z" level=info msg="Start snapshots syncer" Jan 30 13:47:55.909729 containerd[1785]: time="2025-01-30T13:47:55.909721600Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:47:55.909789 containerd[1785]: time="2025-01-30T13:47:55.909733200Z" level=info msg="Start streaming server" Jan 30 13:47:55.909969 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:47:55.911644 containerd[1785]: time="2025-01-30T13:47:55.911625900Z" level=info msg="containerd successfully booted in 0.058719s" Jan 30 13:47:56.343147 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:47:56.353041 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:47:56.356295 systemd[1]: Startup finished in 850ms (firmware) + 29.668s (loader) + 12.710s (kernel) + 15.096s (userspace) = 58.325s. Jan 30 13:47:56.358464 (kubelet)[1922]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:47:56.844696 login[1900]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying Jan 30 13:47:56.862838 login[1901]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:47:56.879760 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 30 13:47:56.890426 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:47:56.893659 systemd-logind[1763]: New session 1 of user core. Jan 30 13:47:56.926170 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:47:56.943143 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:47:56.948489 (systemd)[1937]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:47:57.074926 kubelet[1922]: E0130 13:47:57.074823 1922 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:47:57.077513 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:47:57.078607 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:47:57.253149 systemd[1937]: Queued start job for default target default.target. Jan 30 13:47:57.253702 systemd[1937]: Created slice app.slice - User Application Slice. Jan 30 13:47:57.253732 systemd[1937]: Reached target paths.target - Paths. Jan 30 13:47:57.253750 systemd[1937]: Reached target timers.target - Timers. 
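
The kubelet exit above is expected this early in boot: /var/lib/kubelet/config.yaml is normally written by kubeadm during init/join, which has not run yet, so kubelet restarts until the file appears. A minimal sketch, assuming the standard KubeletConfiguration schema, of writing a placeholder at that path (the field values are illustrative, not from this system):

    import json, os

    # kubelet parses config.yaml as YAML; JSON is valid YAML, so json.dump
    # keeps this sketch dependency-free.
    cfg = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "authentication": {"anonymous": {"enabled": False}},  # illustrative
    }

    os.makedirs("/var/lib/kubelet", exist_ok=True)
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        json.dump(cfg, f, indent=2)
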
Jan 30 13:47:57.261122 systemd[1937]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:47:57.267819 systemd[1937]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:47:57.268098 systemd[1937]: Reached target sockets.target - Sockets. Jan 30 13:47:57.268123 systemd[1937]: Reached target basic.target - Basic System. Jan 30 13:47:57.268174 systemd[1937]: Reached target default.target - Main User Target. Jan 30 13:47:57.268215 systemd[1937]: Startup finished in 307ms. Jan 30 13:47:57.268678 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:47:57.279864 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:47:57.308411 waagent[1896]: 2025-01-30T13:47:57.308308Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.309615Z INFO Daemon Daemon OS: flatcar 4081.3.0 Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.310326Z INFO Daemon Daemon Python: 3.11.9 Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.311249Z INFO Daemon Daemon Run daemon Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.311907Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0' Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.312595Z INFO Daemon Daemon Using waagent for provisioning Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.313477Z INFO Daemon Daemon Activate resource disk Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.314086Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.319074Z INFO Daemon Daemon Found device: None Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.319652Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.320390Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.322792Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:47:57.339199 waagent[1896]: 2025-01-30T13:47:57.323413Z INFO Daemon Daemon Running default provisioning handler Jan 30 13:47:57.340896 waagent[1896]: 2025-01-30T13:47:57.340820Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Jan 30 13:47:57.347147 waagent[1896]: 2025-01-30T13:47:57.347092Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jan 30 13:47:57.354833 waagent[1896]: 2025-01-30T13:47:57.348458Z INFO Daemon Daemon cloud-init is enabled: False Jan 30 13:47:57.354833 waagent[1896]: 2025-01-30T13:47:57.348723Z INFO Daemon Daemon Copying ovf-env.xml Jan 30 13:47:57.504333 waagent[1896]: 2025-01-30T13:47:57.502037Z INFO Daemon Daemon Successfully mounted dvd Jan 30 13:47:57.518043 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Jan 30 13:47:57.519759 waagent[1896]: 2025-01-30T13:47:57.519557Z INFO Daemon Daemon Detect protocol endpoint Jan 30 13:47:57.533035 waagent[1896]: 2025-01-30T13:47:57.520694Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jan 30 13:47:57.533035 waagent[1896]: 2025-01-30T13:47:57.521544Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Jan 30 13:47:57.533035 waagent[1896]: 2025-01-30T13:47:57.522299Z INFO Daemon Daemon Test for route to 168.63.129.16 Jan 30 13:47:57.533035 waagent[1896]: 2025-01-30T13:47:57.522825Z INFO Daemon Daemon Route to 168.63.129.16 exists Jan 30 13:47:57.533035 waagent[1896]: 2025-01-30T13:47:57.523745Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jan 30 13:47:57.555246 waagent[1896]: 2025-01-30T13:47:57.555187Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jan 30 13:47:57.563293 waagent[1896]: 2025-01-30T13:47:57.556753Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jan 30 13:47:57.563293 waagent[1896]: 2025-01-30T13:47:57.557591Z INFO Daemon Daemon Server preferred version:2015-04-05 Jan 30 13:47:57.610989 waagent[1896]: 2025-01-30T13:47:57.610861Z INFO Daemon Daemon Initializing goal state during protocol detection Jan 30 13:47:57.614188 waagent[1896]: 2025-01-30T13:47:57.614112Z INFO Daemon Daemon Forcing an update of the goal state. Jan 30 13:47:57.619656 waagent[1896]: 2025-01-30T13:47:57.619600Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:47:57.651987 waagent[1896]: 2025-01-30T13:47:57.651899Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Jan 30 13:47:57.667262 waagent[1896]: 2025-01-30T13:47:57.654340Z INFO Daemon Jan 30 13:47:57.667262 waagent[1896]: 2025-01-30T13:47:57.655602Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: ce202fd0-1d7f-49cf-b69a-2d37e7c1e5b3 eTag: 6968116161621836177 source: Fabric] Jan 30 13:47:57.667262 waagent[1896]: 2025-01-30T13:47:57.657078Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jan 30 13:47:57.667262 waagent[1896]: 2025-01-30T13:47:57.658115Z INFO Daemon Jan 30 13:47:57.667262 waagent[1896]: 2025-01-30T13:47:57.658839Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:47:57.670059 waagent[1896]: 2025-01-30T13:47:57.670014Z INFO Daemon Daemon Downloading artifacts profile blob Jan 30 13:47:57.741214 waagent[1896]: 2025-01-30T13:47:57.741124Z INFO Daemon Downloaded certificate {'thumbprint': '945688AA4DE1C0846AEF162B183ADF235B13F418', 'hasPrivateKey': False} Jan 30 13:47:57.750814 waagent[1896]: 2025-01-30T13:47:57.742524Z INFO Daemon Downloaded certificate {'thumbprint': '2858559ACF3D52BA599D8EB3715A2841F80AD620', 'hasPrivateKey': True} Jan 30 13:47:57.750814 waagent[1896]: 2025-01-30T13:47:57.743682Z INFO Daemon Fetch goal state completed Jan 30 13:47:57.757645 waagent[1896]: 2025-01-30T13:47:57.757503Z INFO Daemon Daemon Starting provisioning Jan 30 13:47:57.764062 waagent[1896]: 2025-01-30T13:47:57.758575Z INFO Daemon Daemon Handle ovf-env.xml. 
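
The two 40-hex-digit thumbprints above are SHA-1 digests of the DER-encoded goal-state certificates. A small sketch computing a thumbprint in that form from a PEM file; the path is illustrative (waagent keeps its certificates under /var/lib/waagent, named by thumbprint):

    import base64, hashlib

    pem = open("/var/lib/waagent/example.crt").read()  # illustrative path

    # Drop the PEM armor lines, decode the base64 body to DER, hash it.
    body = "".join(l for l in pem.splitlines() if "-----" not in l)
    der = base64.b64decode(body)
    print(hashlib.sha1(der).hexdigest().upper())
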
Jan 30 13:47:57.764062 waagent[1896]: 2025-01-30T13:47:57.759369Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-95297e853e] Jan 30 13:47:57.783152 waagent[1896]: 2025-01-30T13:47:57.783072Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-95297e853e] Jan 30 13:47:57.789866 waagent[1896]: 2025-01-30T13:47:57.784285Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jan 30 13:47:57.789866 waagent[1896]: 2025-01-30T13:47:57.785065Z INFO Daemon Daemon Primary interface is [eth0] Jan 30 13:47:57.810127 systemd-networkd[1361]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 30 13:47:57.810136 systemd-networkd[1361]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 30 13:47:57.810194 systemd-networkd[1361]: eth0: DHCP lease lost Jan 30 13:47:57.811565 waagent[1896]: 2025-01-30T13:47:57.811495Z INFO Daemon Daemon Create user account if not exists Jan 30 13:47:57.826553 waagent[1896]: 2025-01-30T13:47:57.812753Z INFO Daemon Daemon User core already exists, skip useradd Jan 30 13:47:57.826553 waagent[1896]: 2025-01-30T13:47:57.813461Z INFO Daemon Daemon Configure sudoer Jan 30 13:47:57.826553 waagent[1896]: 2025-01-30T13:47:57.814477Z INFO Daemon Daemon Configure sshd Jan 30 13:47:57.826553 waagent[1896]: 2025-01-30T13:47:57.815209Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jan 30 13:47:57.826553 waagent[1896]: 2025-01-30T13:47:57.815807Z INFO Daemon Daemon Deploy ssh public key. Jan 30 13:47:57.827054 systemd-networkd[1361]: eth0: DHCPv6 lease lost Jan 30 13:47:57.845065 login[1900]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:47:57.849206 systemd-logind[1763]: New session 2 of user core. Jan 30 13:47:57.860226 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:47:57.863010 systemd-networkd[1361]: eth0: DHCPv4 address 10.200.8.41/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 30 13:47:58.959614 waagent[1896]: 2025-01-30T13:47:58.959544Z INFO Daemon Daemon Provisioning complete Jan 30 13:47:58.973404 waagent[1896]: 2025-01-30T13:47:58.973344Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jan 30 13:47:58.979246 waagent[1896]: 2025-01-30T13:47:58.974374Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Jan 30 13:47:58.979246 waagent[1896]: 2025-01-30T13:47:58.975084Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jan 30 13:47:59.098445 waagent[1997]: 2025-01-30T13:47:59.098339Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jan 30 13:47:59.098898 waagent[1997]: 2025-01-30T13:47:59.098502Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0 Jan 30 13:47:59.098898 waagent[1997]: 2025-01-30T13:47:59.098586Z INFO ExtHandler ExtHandler Python: 3.11.9 Jan 30 13:47:59.157876 waagent[1997]: 2025-01-30T13:47:59.157773Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jan 30 13:47:59.158140 waagent[1997]: 2025-01-30T13:47:59.158088Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:47:59.158243 waagent[1997]: 2025-01-30T13:47:59.158203Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:47:59.166785 waagent[1997]: 2025-01-30T13:47:59.166712Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jan 30 13:47:59.174330 waagent[1997]: 2025-01-30T13:47:59.174281Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Jan 30 13:47:59.174771 waagent[1997]: 2025-01-30T13:47:59.174716Z INFO ExtHandler Jan 30 13:47:59.174862 waagent[1997]: 2025-01-30T13:47:59.174803Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d254ceb0-6570-46af-b478-ae1f238f78ce eTag: 6968116161621836177 source: Fabric] Jan 30 13:47:59.175179 waagent[1997]: 2025-01-30T13:47:59.175127Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jan 30 13:47:59.175724 waagent[1997]: 2025-01-30T13:47:59.175667Z INFO ExtHandler Jan 30 13:47:59.175788 waagent[1997]: 2025-01-30T13:47:59.175750Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 30 13:47:59.179345 waagent[1997]: 2025-01-30T13:47:59.179304Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 30 13:47:59.253925 waagent[1997]: 2025-01-30T13:47:59.253794Z INFO ExtHandler Downloaded certificate {'thumbprint': '945688AA4DE1C0846AEF162B183ADF235B13F418', 'hasPrivateKey': False} Jan 30 13:47:59.254333 waagent[1997]: 2025-01-30T13:47:59.254280Z INFO ExtHandler Downloaded certificate {'thumbprint': '2858559ACF3D52BA599D8EB3715A2841F80AD620', 'hasPrivateKey': True} Jan 30 13:47:59.254758 waagent[1997]: 2025-01-30T13:47:59.254709Z INFO ExtHandler Fetch goal state completed Jan 30 13:47:59.269288 waagent[1997]: 2025-01-30T13:47:59.269230Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1997 Jan 30 13:47:59.269446 waagent[1997]: 2025-01-30T13:47:59.269398Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 30 13:47:59.271003 waagent[1997]: 2025-01-30T13:47:59.270933Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 30 13:47:59.271376 waagent[1997]: 2025-01-30T13:47:59.271328Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 30 13:47:59.327339 waagent[1997]: 2025-01-30T13:47:59.327283Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 30 13:47:59.327619 waagent[1997]: 2025-01-30T13:47:59.327560Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jan 30 13:47:59.335694 waagent[1997]: 2025-01-30T13:47:59.335623Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 30 13:47:59.342675 systemd[1]: Reloading requested from client PID 2012 ('systemctl') (unit waagent.service)... Jan 30 13:47:59.342694 systemd[1]: Reloading... Jan 30 13:47:59.412972 zram_generator::config[2042]: No configuration found. Jan 30 13:47:59.549733 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:59.625712 systemd[1]: Reloading finished in 282 ms. Jan 30 13:47:59.651401 waagent[1997]: 2025-01-30T13:47:59.651191Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 30 13:47:59.659124 systemd[1]: Reloading requested from client PID 2108 ('systemctl') (unit waagent.service)... Jan 30 13:47:59.659140 systemd[1]: Reloading... Jan 30 13:47:59.744977 zram_generator::config[2145]: No configuration found. Jan 30 13:47:59.864889 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:47:59.941190 systemd[1]: Reloading finished in 281 ms. Jan 30 13:47:59.967965 waagent[1997]: 2025-01-30T13:47:59.965425Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 30 13:47:59.967965 waagent[1997]: 2025-01-30T13:47:59.965625Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 30 13:48:00.361164 waagent[1997]: 2025-01-30T13:48:00.361052Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jan 30 13:48:00.362015 waagent[1997]: 2025-01-30T13:48:00.361917Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 30 13:48:00.362993 waagent[1997]: 2025-01-30T13:48:00.362870Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 30 13:48:00.363602 waagent[1997]: 2025-01-30T13:48:00.363539Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 30 13:48:00.363766 waagent[1997]: 2025-01-30T13:48:00.363687Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:48:00.363892 waagent[1997]: 2025-01-30T13:48:00.363832Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 30 13:48:00.364175 waagent[1997]: 2025-01-30T13:48:00.364121Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:48:00.364921 waagent[1997]: 2025-01-30T13:48:00.364866Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 30 13:48:00.365050 waagent[1997]: 2025-01-30T13:48:00.364979Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 30 13:48:00.365114 waagent[1997]: 2025-01-30T13:48:00.365044Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Jan 30 13:48:00.365798 waagent[1997]: 2025-01-30T13:48:00.365736Z INFO EnvHandler ExtHandler Configure routes Jan 30 13:48:00.365922 waagent[1997]: 2025-01-30T13:48:00.365860Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 30 13:48:00.366157 waagent[1997]: 2025-01-30T13:48:00.366100Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 30 13:48:00.366282 waagent[1997]: 2025-01-30T13:48:00.366208Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jan 30 13:48:00.366628 waagent[1997]: 2025-01-30T13:48:00.366578Z INFO EnvHandler ExtHandler Gateway:None Jan 30 13:48:00.366742 waagent[1997]: 2025-01-30T13:48:00.366692Z INFO EnvHandler ExtHandler Routes:None Jan 30 13:48:00.370628 waagent[1997]: 2025-01-30T13:48:00.370564Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 30 13:48:00.373031 waagent[1997]: 2025-01-30T13:48:00.371221Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 30 13:48:00.373031 waagent[1997]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 30 13:48:00.373031 waagent[1997]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 30 13:48:00.373031 waagent[1997]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 30 13:48:00.373031 waagent[1997]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:48:00.373031 waagent[1997]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:48:00.373031 waagent[1997]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 30 13:48:00.379965 waagent[1997]: 2025-01-30T13:48:00.379337Z INFO ExtHandler ExtHandler Jan 30 13:48:00.379965 waagent[1997]: 2025-01-30T13:48:00.379439Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 9d7ac90c-7f71-4b2c-99ed-8c81a8c52e07 correlation 85eb2a4f-8728-4712-a5c1-a4bf5558716f created: 2025-01-30T13:46:46.601654Z] Jan 30 13:48:00.380085 waagent[1997]: 2025-01-30T13:48:00.379961Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
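The routing table dumped above is the raw form of /proc/net/route, where addresses are little-endian hex: the gateway field 0108C80A decodes to 10.200.8.1, and the mask 00FFFFFF to 255.255.255.0, matching the 10.200.8.41/24 address shown further down. A minimal Python sketch (an illustrative helper, not waagent's own code) that decodes those fields:

```python
import socket
import struct

def hex_to_ip(field: str) -> str:
    # /proc/net/route stores IPv4 addresses as little-endian hex,
    # so "0108C80A" unpacks to "10.200.8.1".
    return socket.inet_ntoa(struct.pack("<L", int(field, 16)))

with open("/proc/net/route") as route:
    next(route)  # skip the header row shown in the log above
    for line in route:
        iface, dest, gw, _flags, _refcnt, _use, metric, mask, *_ = line.split()
        print(f"{iface}: {hex_to_ip(dest)}/{hex_to_ip(mask)} "
              f"via {hex_to_ip(gw)} metric {metric}")
```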
Jan 30 13:48:00.380815 waagent[1997]: 2025-01-30T13:48:00.380759Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Jan 30 13:48:00.421610 waagent[1997]: 2025-01-30T13:48:00.421543Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 83110702-4262-493B-ABD4-9772E7D20803;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 30 13:48:00.439477 waagent[1997]: 2025-01-30T13:48:00.439405Z INFO MonitorHandler ExtHandler Network interfaces: Jan 30 13:48:00.439477 waagent[1997]: Executing ['ip', '-a', '-o', 'link']: Jan 30 13:48:00.439477 waagent[1997]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 30 13:48:00.439477 waagent[1997]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:93:55 brd ff:ff:ff:ff:ff:ff Jan 30 13:48:00.439477 waagent[1997]: 3: enP29028s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b4:93:55 brd ff:ff:ff:ff:ff:ff\ altname enP29028p0s2 Jan 30 13:48:00.439477 waagent[1997]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 30 13:48:00.439477 waagent[1997]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 30 13:48:00.439477 waagent[1997]: 2: eth0 inet 10.200.8.41/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 30 13:48:00.439477 waagent[1997]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 30 13:48:00.439477 waagent[1997]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 30 13:48:00.439477 waagent[1997]: 2: eth0 inet6 fe80::20d:3aff:feb4:9355/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:48:00.439477 waagent[1997]: 3: enP29028s1 inet6 fe80::20d:3aff:feb4:9355/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 30 13:48:00.533970 waagent[1997]: 2025-01-30T13:48:00.533889Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 30 13:48:00.533970 waagent[1997]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.533970 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.533970 waagent[1997]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.533970 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.533970 waagent[1997]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.533970 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.533970 waagent[1997]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:48:00.533970 waagent[1997]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:48:00.533970 waagent[1997]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:48:00.537227 waagent[1997]: 2025-01-30T13:48:00.537169Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 30 13:48:00.537227 waagent[1997]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.537227 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.537227 waagent[1997]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.537227 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.537227 waagent[1997]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 30 13:48:00.537227 waagent[1997]: pkts bytes target prot opt in out source destination Jan 30 13:48:00.537227 waagent[1997]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 30 13:48:00.537227 waagent[1997]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 30 13:48:00.537227 waagent[1997]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 30 13:48:00.537626 waagent[1997]: 2025-01-30T13:48:00.537480Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 30 13:48:07.195338 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:48:07.207191 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:07.308179 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:07.320345 (kubelet)[2247]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:07.954744 kubelet[2247]: E0130 13:48:07.954644 2247 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:07.958658 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:07.959631 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:18.195487 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:48:18.202201 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:18.306139 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
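The OUTPUT rules dumped twice above implement waagent's WireServer policy: permit TCP/53 (DNS) to 168.63.129.16, permit traffic owned by UID 0 (the agent runs as root), and drop any other new or invalid connection to that address. A hedged sketch of equivalent iptables invocations, driven from Python for illustration; waagent's actual setup code differs and this only mirrors the visible rule set (requires root):

```python
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS lookups against the WireServer.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "--dport", "53", "-j", "ACCEPT"],
    # Allow traffic from root (UID 0), i.e. the agent itself.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop any other new or invalid connection to the WireServer.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    subprocess.run(["iptables", "-w", *rule], check=True)
```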
Jan 30 13:48:18.309503 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:18.478051 chronyd[1757]: Selected source PHC0 Jan 30 13:48:18.851027 kubelet[2269]: E0130 13:48:18.850889 2269 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:18.853579 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:18.853910 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:24.244554 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:48:24.251239 systemd[1]: Started sshd@0-10.200.8.41:22-10.200.16.10:56230.service - OpenSSH per-connection server daemon (10.200.16.10:56230). Jan 30 13:48:24.965885 sshd[2277]: Accepted publickey for core from 10.200.16.10 port 56230 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:24.967455 sshd[2277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:24.972240 systemd-logind[1763]: New session 3 of user core. Jan 30 13:48:24.980213 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:48:25.601264 systemd[1]: Started sshd@1-10.200.8.41:22-10.200.16.10:56234.service - OpenSSH per-connection server daemon (10.200.16.10:56234). Jan 30 13:48:26.317252 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 56234 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:26.318770 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:26.323465 systemd-logind[1763]: New session 4 of user core. Jan 30 13:48:26.334186 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:48:26.905517 sshd[2282]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:26.908670 systemd[1]: sshd@1-10.200.8.41:22-10.200.16.10:56234.service: Deactivated successfully. Jan 30 13:48:26.912922 systemd-logind[1763]: Session 4 logged out. Waiting for processes to exit. Jan 30 13:48:26.914382 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:48:26.915764 systemd-logind[1763]: Removed session 4. Jan 30 13:48:27.021246 systemd[1]: Started sshd@2-10.200.8.41:22-10.200.16.10:53236.service - OpenSSH per-connection server daemon (10.200.16.10:53236). Jan 30 13:48:27.987593 sshd[2290]: Accepted publickey for core from 10.200.16.10 port 53236 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:27.989423 sshd[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:27.994210 systemd-logind[1763]: New session 5 of user core. Jan 30 13:48:28.004221 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:48:28.595455 sshd[2290]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:28.600448 systemd[1]: sshd@2-10.200.8.41:22-10.200.16.10:53236.service: Deactivated successfully. Jan 30 13:48:28.605015 systemd-logind[1763]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:48:28.605790 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:48:28.606754 systemd-logind[1763]: Removed session 5. 
Jan 30 13:48:28.717535 systemd[1]: Started sshd@3-10.200.8.41:22-10.200.16.10:53252.service - OpenSSH per-connection server daemon (10.200.16.10:53252). Jan 30 13:48:28.945498 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:48:28.957384 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:29.057178 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:29.059148 (kubelet)[2312]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:29.098327 kubelet[2312]: E0130 13:48:29.098264 2312 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:29.101076 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:29.101389 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:29.412985 sshd[2298]: Accepted publickey for core from 10.200.16.10 port 53252 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:29.414739 sshd[2298]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:29.419007 systemd-logind[1763]: New session 6 of user core. Jan 30 13:48:29.425227 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:48:29.894363 sshd[2298]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:29.898563 systemd[1]: sshd@3-10.200.8.41:22-10.200.16.10:53252.service: Deactivated successfully. Jan 30 13:48:29.902312 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:48:29.903204 systemd-logind[1763]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:48:29.904273 systemd-logind[1763]: Removed session 6. Jan 30 13:48:30.016316 systemd[1]: Started sshd@4-10.200.8.41:22-10.200.16.10:53260.service - OpenSSH per-connection server daemon (10.200.16.10:53260). Jan 30 13:48:30.684579 sshd[2327]: Accepted publickey for core from 10.200.16.10 port 53260 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:30.686448 sshd[2327]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:30.692022 systemd-logind[1763]: New session 7 of user core. Jan 30 13:48:30.699190 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:48:31.151428 sudo[2331]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:48:31.151825 sudo[2331]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:31.184467 sudo[2331]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:31.316777 sshd[2327]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:31.322834 systemd[1]: sshd@4-10.200.8.41:22-10.200.16.10:53260.service: Deactivated successfully. Jan 30 13:48:31.326696 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:48:31.327333 systemd-logind[1763]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:48:31.328666 systemd-logind[1763]: Removed session 7. Jan 30 13:48:31.449353 systemd[1]: Started sshd@5-10.200.8.41:22-10.200.16.10:53268.service - OpenSSH per-connection server daemon (10.200.16.10:53268). 
Jan 30 13:48:32.118120 sshd[2336]: Accepted publickey for core from 10.200.16.10 port 53268 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:32.120005 sshd[2336]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:32.125580 systemd-logind[1763]: New session 8 of user core. Jan 30 13:48:32.132263 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:48:32.486981 sudo[2341]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:48:32.487350 sudo[2341]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:32.490654 sudo[2341]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:32.495747 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:48:32.496117 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:32.508400 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:48:32.511193 auditctl[2344]: No rules Jan 30 13:48:32.511543 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:48:32.511797 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:48:32.521362 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:48:32.542555 augenrules[2363]: No rules Jan 30 13:48:32.544153 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:48:32.547106 sudo[2340]: pam_unix(sudo:session): session closed for user root Jan 30 13:48:32.660050 sshd[2336]: pam_unix(sshd:session): session closed for user core Jan 30 13:48:32.666294 systemd[1]: sshd@5-10.200.8.41:22-10.200.16.10:53268.service: Deactivated successfully. Jan 30 13:48:32.669541 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:48:32.670285 systemd-logind[1763]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:48:32.671190 systemd-logind[1763]: Removed session 8. Jan 30 13:48:32.776625 systemd[1]: Started sshd@6-10.200.8.41:22-10.200.16.10:53284.service - OpenSSH per-connection server daemon (10.200.16.10:53284). Jan 30 13:48:33.471892 sshd[2372]: Accepted publickey for core from 10.200.16.10 port 53284 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:48:33.473699 sshd[2372]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:48:33.478482 systemd-logind[1763]: New session 9 of user core. Jan 30 13:48:33.489219 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:48:33.940327 sudo[2376]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:48:33.940707 sudo[2376]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:48:35.086452 (dockerd)[2391]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:48:35.086478 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:48:35.500484 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 30 13:48:36.270414 dockerd[2391]: time="2025-01-30T13:48:36.270346204Z" level=info msg="Starting up" Jan 30 13:48:36.730281 dockerd[2391]: time="2025-01-30T13:48:36.730232536Z" level=info msg="Loading containers: start." 
Jan 30 13:48:36.878118 kernel: Initializing XFRM netlink socket Jan 30 13:48:37.057513 systemd-networkd[1361]: docker0: Link UP Jan 30 13:48:37.087173 dockerd[2391]: time="2025-01-30T13:48:37.087137228Z" level=info msg="Loading containers: done." Jan 30 13:48:37.137450 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1203507626-merged.mount: Deactivated successfully. Jan 30 13:48:37.148874 dockerd[2391]: time="2025-01-30T13:48:37.148827089Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:48:37.149017 dockerd[2391]: time="2025-01-30T13:48:37.148968693Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:48:37.149155 dockerd[2391]: time="2025-01-30T13:48:37.149126998Z" level=info msg="Daemon has completed initialization" Jan 30 13:48:37.207296 dockerd[2391]: time="2025-01-30T13:48:37.207230157Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:48:37.207729 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:48:39.195378 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:48:39.208214 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:39.893185 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:39.904359 (kubelet)[2543]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:39.952242 kubelet[2543]: E0130 13:48:39.952139 2543 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:39.961993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:39.962431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:40.092212 containerd[1785]: time="2025-01-30T13:48:40.092172945Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:48:40.550870 update_engine[1768]: I20250130 13:48:40.550791 1768 update_attempter.cc:509] Updating boot flags... Jan 30 13:48:40.608358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2564) Jan 30 13:48:40.733971 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2563) Jan 30 13:48:40.760247 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1392994379.mount: Deactivated successfully. 
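With the daemon now reporting "API listen on /run/docker.sock", a standard-library-only smoke test can hit the Docker Engine API's /_ping liveness endpoint over that UNIX socket. This is a hypothetical check, not part of the boot flow logged here:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection variant that dials a UNIX socket instead of TCP."""

    def __init__(self, socket_path: str):
        super().__init__("localhost")  # host is unused for UNIX sockets
        self.socket_path = socket_path

    def connect(self):
        self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        self.sock.connect(self.socket_path)

conn = UnixHTTPConnection("/run/docker.sock")
conn.request("GET", "/_ping")
print(conn.getresponse().read())  # b'OK' when the daemon is healthy
```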
Jan 30 13:48:40.858973 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2563) Jan 30 13:48:42.565817 containerd[1785]: time="2025-01-30T13:48:42.565753111Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.568920 containerd[1785]: time="2025-01-30T13:48:42.568856804Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677020" Jan 30 13:48:42.573259 containerd[1785]: time="2025-01-30T13:48:42.573194234Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.577544 containerd[1785]: time="2025-01-30T13:48:42.577485562Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:42.578610 containerd[1785]: time="2025-01-30T13:48:42.578492092Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 2.486278846s" Jan 30 13:48:42.578610 containerd[1785]: time="2025-01-30T13:48:42.578539693Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:48:42.604141 containerd[1785]: time="2025-01-30T13:48:42.604093857Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:48:44.311384 containerd[1785]: time="2025-01-30T13:48:44.311327098Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:44.315014 containerd[1785]: time="2025-01-30T13:48:44.314962107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605753" Jan 30 13:48:44.318884 containerd[1785]: time="2025-01-30T13:48:44.318830223Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:44.327375 containerd[1785]: time="2025-01-30T13:48:44.327228474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:44.328716 containerd[1785]: time="2025-01-30T13:48:44.328559914Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.724422655s" Jan 30 13:48:44.328716 containerd[1785]: time="2025-01-30T13:48:44.328601615Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" 
returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:48:44.351318 containerd[1785]: time="2025-01-30T13:48:44.351279893Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:48:45.670932 containerd[1785]: time="2025-01-30T13:48:45.670877445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:45.673720 containerd[1785]: time="2025-01-30T13:48:45.673657828Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783072" Jan 30 13:48:45.677358 containerd[1785]: time="2025-01-30T13:48:45.677307437Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:45.683473 containerd[1785]: time="2025-01-30T13:48:45.683339917Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:45.684982 containerd[1785]: time="2025-01-30T13:48:45.684827062Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.333510767s" Jan 30 13:48:45.684982 containerd[1785]: time="2025-01-30T13:48:45.684868163Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:48:45.707040 containerd[1785]: time="2025-01-30T13:48:45.707004725Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:48:47.011109 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount582586443.mount: Deactivated successfully. 
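As a worked check on the pull messages above, dividing each reported image size by its pull duration gives the effective registry throughput. The figures are copied from the log; only the arithmetic is added here:

```python
# (size in bytes, pull duration in seconds), as reported by containerd above.
pulls = {
    "kube-apiserver:v1.30.9": (32_673_812, 2.486278846),
    "kube-controller-manager:v1.30.9": (31_052_327, 1.724422655),
    "kube-scheduler:v1.30.9": (19_229_664, 1.333510767),
}
for image, (size, secs) in pulls.items():
    print(f"{image}: {size / secs / 1e6:.1f} MB/s")
# -> roughly 13.1, 18.0 and 14.4 MB/s
```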
Jan 30 13:48:47.485847 containerd[1785]: time="2025-01-30T13:48:47.485778704Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:47.488472 containerd[1785]: time="2025-01-30T13:48:47.488403583Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058345" Jan 30 13:48:47.492433 containerd[1785]: time="2025-01-30T13:48:47.492371102Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:47.497224 containerd[1785]: time="2025-01-30T13:48:47.497171145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:47.498073 containerd[1785]: time="2025-01-30T13:48:47.497742562Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.790667036s" Jan 30 13:48:47.498073 containerd[1785]: time="2025-01-30T13:48:47.497782263Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:48:47.521629 containerd[1785]: time="2025-01-30T13:48:47.521560974Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:48:48.135739 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1998894416.mount: Deactivated successfully. 
Jan 30 13:48:49.446193 containerd[1785]: time="2025-01-30T13:48:49.446132324Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.448903 containerd[1785]: time="2025-01-30T13:48:49.448828884Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 30 13:48:49.453234 containerd[1785]: time="2025-01-30T13:48:49.453180081Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.463633 containerd[1785]: time="2025-01-30T13:48:49.463582213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:49.464928 containerd[1785]: time="2025-01-30T13:48:49.464738839Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.942987658s" Jan 30 13:48:49.464928 containerd[1785]: time="2025-01-30T13:48:49.464817440Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:48:49.487992 containerd[1785]: time="2025-01-30T13:48:49.487962656Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:48:50.068359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 30 13:48:50.077295 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:50.082506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032096188.mount: Deactivated successfully. Jan 30 13:48:50.194209 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:48:50.196156 (kubelet)[2798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:48:50.326661 containerd[1785]: time="2025-01-30T13:48:50.326320121Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:50.722561 containerd[1785]: time="2025-01-30T13:48:50.722445062Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 30 13:48:50.728971 containerd[1785]: time="2025-01-30T13:48:50.727813196Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:50.734529 containerd[1785]: time="2025-01-30T13:48:50.734495063Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:50.735816 containerd[1785]: time="2025-01-30T13:48:50.735303683Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 1.247170024s" Jan 30 13:48:50.736067 containerd[1785]: time="2025-01-30T13:48:50.736045502Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:48:50.760122 kubelet[2798]: E0130 13:48:50.760050 2798 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:48:50.762628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:48:50.763022 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:48:50.770752 containerd[1785]: time="2025-01-30T13:48:50.770517262Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:48:51.386268 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3787499775.mount: Deactivated successfully. 
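Each of the five kubelet start attempts so far dies on the missing /var/lib/kubelet/config.yaml, and systemd reschedules the next one roughly 10–11 s later. An illustrative computation over the restart-counter timestamps copied from the journal above, suggesting a fixed ~10 s restart delay rather than an exponential backoff:

```python
from datetime import datetime

# "Scheduled restart job, restart counter is at N" timestamps from the log.
attempts = ["13:48:07.195", "13:48:18.195", "13:48:28.945",
            "13:48:39.195", "13:48:50.068"]
times = [datetime.strptime(t, "%H:%M:%S.%f") for t in attempts]
gaps = [round((b - a).total_seconds(), 1) for a, b in zip(times, times[1:])]
print(gaps)  # [11.0, 10.8, 10.2, 10.9] -> consistent with a ~10 s RestartSec
```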
Jan 30 13:48:53.688536 containerd[1785]: time="2025-01-30T13:48:53.688478503Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:53.691436 containerd[1785]: time="2025-01-30T13:48:53.691366675Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 30 13:48:53.697081 containerd[1785]: time="2025-01-30T13:48:53.697023416Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:53.701245 containerd[1785]: time="2025-01-30T13:48:53.701194020Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:48:53.702451 containerd[1785]: time="2025-01-30T13:48:53.702264147Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.931706284s" Jan 30 13:48:53.702451 containerd[1785]: time="2025-01-30T13:48:53.702304248Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:48:56.640757 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:56.649250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:56.679507 systemd[1]: Reloading requested from client PID 2923 ('systemctl') (unit session-9.scope)... Jan 30 13:48:56.679660 systemd[1]: Reloading... Jan 30 13:48:56.791980 zram_generator::config[2963]: No configuration found. Jan 30 13:48:56.932553 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:48:57.015395 systemd[1]: Reloading finished in 335 ms. Jan 30 13:48:57.070270 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:48:57.070377 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:48:57.071236 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:57.077237 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:48:57.295029 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:48:57.299350 (kubelet)[3045]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:48:57.337914 kubelet[3045]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:57.337914 kubelet[3045]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jan 30 13:48:57.337914 kubelet[3045]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:48:57.922982 kubelet[3045]: I0130 13:48:57.922034 3045 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:48:58.256529 kubelet[3045]: I0130 13:48:58.256485 3045 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:48:58.256719 kubelet[3045]: I0130 13:48:58.256569 3045 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:48:58.256899 kubelet[3045]: I0130 13:48:58.256876 3045 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:48:58.273074 kubelet[3045]: I0130 13:48:58.272524 3045 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:48:58.273215 kubelet[3045]: E0130 13:48:58.273165 3045 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.285368 kubelet[3045]: I0130 13:48:58.285340 3045 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:48:58.286792 kubelet[3045]: I0130 13:48:58.286742 3045 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:48:58.287025 kubelet[3045]: I0130 13:48:58.286792 3045 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-95297e853e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:48:58.287197 kubelet[3045]: I0130 13:48:58.287041 3045 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 30 13:48:58.287197 kubelet[3045]: I0130 13:48:58.287057 3045 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:48:58.287277 kubelet[3045]: I0130 13:48:58.287211 3045 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:58.288107 kubelet[3045]: I0130 13:48:58.288089 3045 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:48:58.288192 kubelet[3045]: I0130 13:48:58.288111 3045 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:48:58.288192 kubelet[3045]: I0130 13:48:58.288141 3045 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:48:58.288192 kubelet[3045]: I0130 13:48:58.288167 3045 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:48:58.292846 kubelet[3045]: W0130 13:48:58.292548 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.292846 kubelet[3045]: E0130 13:48:58.292621 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.292846 kubelet[3045]: W0130 13:48:58.292695 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-95297e853e&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.292846 kubelet[3045]: E0130 13:48:58.292732 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-95297e853e&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.293441 kubelet[3045]: I0130 13:48:58.293222 3045 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:48:58.295924 kubelet[3045]: I0130 13:48:58.294800 3045 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:48:58.295924 kubelet[3045]: W0130 13:48:58.294875 3045 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
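The HardEvictionThresholds embedded in the nodeConfig dump above are easier to read restated as signal → threshold. The values below are lifted directly from the logged JSON (100Mi absolute, the rest percentages); this is a readable restatement, not new configuration:

```python
# Hard eviction thresholds from the container_manager_linux nodeConfig above.
hard_eviction = {
    "memory.available":  "100Mi",  # absolute quantity
    "nodefs.available":  "10%",    # Percentage 0.1
    "nodefs.inodesFree": "5%",     # Percentage 0.05
    "imagefs.available": "15%",    # Percentage 0.15
    "imagefs.inodesFree": "5%",    # Percentage 0.05
}
for signal, threshold in hard_eviction.items():
    print(f"evict pods when {signal} < {threshold}")
```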
Jan 30 13:48:58.295924 kubelet[3045]: I0130 13:48:58.295788 3045 server.go:1264] "Started kubelet" Jan 30 13:48:58.305846 kubelet[3045]: I0130 13:48:58.305686 3045 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:48:58.307733 kubelet[3045]: I0130 13:48:58.307504 3045 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:48:58.308549 kubelet[3045]: I0130 13:48:58.308408 3045 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:48:58.309765 kubelet[3045]: I0130 13:48:58.309745 3045 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:48:58.310576 kubelet[3045]: I0130 13:48:58.310557 3045 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:48:58.310652 kubelet[3045]: I0130 13:48:58.310641 3045 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:48:58.311747 kubelet[3045]: I0130 13:48:58.311647 3045 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:48:58.312992 kubelet[3045]: I0130 13:48:58.312970 3045 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:48:58.316179 kubelet[3045]: E0130 13:48:58.316033 3045 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.41:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.41:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-95297e853e.181f7c8fcfbd5876 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-95297e853e,UID:ci-4081.3.0-a-95297e853e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-95297e853e,},FirstTimestamp:2025-01-30 13:48:58.295761014 +0000 UTC m=+0.992365573,LastTimestamp:2025-01-30 13:48:58.295761014 +0000 UTC m=+0.992365573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-95297e853e,}" Jan 30 13:48:58.317562 kubelet[3045]: I0130 13:48:58.317541 3045 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:48:58.317935 kubelet[3045]: I0130 13:48:58.317744 3045 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:48:58.319571 kubelet[3045]: E0130 13:48:58.318428 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-95297e853e?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="200ms" Jan 30 13:48:58.319571 kubelet[3045]: W0130 13:48:58.318519 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.319571 kubelet[3045]: E0130 13:48:58.318571 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 
13:48:58.320796 kubelet[3045]: E0130 13:48:58.320764 3045 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:48:58.321026 kubelet[3045]: I0130 13:48:58.321006 3045 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:48:58.337991 kubelet[3045]: I0130 13:48:58.337960 3045 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:48:58.339864 kubelet[3045]: I0130 13:48:58.339844 3045 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:48:58.340006 kubelet[3045]: I0130 13:48:58.339996 3045 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:48:58.340110 kubelet[3045]: I0130 13:48:58.340101 3045 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:48:58.340244 kubelet[3045]: E0130 13:48:58.340207 3045 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:48:58.348972 kubelet[3045]: W0130 13:48:58.348933 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.349097 kubelet[3045]: E0130 13:48:58.349085 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:58.363280 kubelet[3045]: I0130 13:48:58.363260 3045 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:48:58.363280 kubelet[3045]: I0130 13:48:58.363279 3045 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:48:58.363408 kubelet[3045]: I0130 13:48:58.363298 3045 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:48:58.369602 kubelet[3045]: I0130 13:48:58.369583 3045 policy_none.go:49] "None policy: Start" Jan 30 13:48:58.370420 kubelet[3045]: I0130 13:48:58.370173 3045 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:48:58.370420 kubelet[3045]: I0130 13:48:58.370200 3045 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:48:58.378120 kubelet[3045]: I0130 13:48:58.378104 3045 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:48:58.378347 kubelet[3045]: I0130 13:48:58.378322 3045 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:48:58.378981 kubelet[3045]: I0130 13:48:58.378455 3045 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:48:58.382602 kubelet[3045]: E0130 13:48:58.382577 3045 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-95297e853e\" not found" Jan 30 13:48:58.412585 kubelet[3045]: I0130 13:48:58.412527 3045 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.412993 kubelet[3045]: E0130 13:48:58.412934 3045 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: 
connection refused" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.440809 kubelet[3045]: I0130 13:48:58.440704 3045 topology_manager.go:215] "Topology Admit Handler" podUID="c6be35ed39095b90b4775fe241b24732" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.443110 kubelet[3045]: I0130 13:48:58.442972 3045 topology_manager.go:215] "Topology Admit Handler" podUID="6d2b1a68abcd601ef7dae26e1662e1bc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.444778 kubelet[3045]: I0130 13:48:58.444599 3045 topology_manager.go:215] "Topology Admit Handler" podUID="87f68da6677a21dd5371a77da831e677" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.519632 kubelet[3045]: E0130 13:48:58.519501 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-95297e853e?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="400ms" Jan 30 13:48:58.611793 kubelet[3045]: I0130 13:48:58.611730 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.611793 kubelet[3045]: I0130 13:48:58.611796 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612210 kubelet[3045]: I0130 13:48:58.611823 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612210 kubelet[3045]: I0130 13:48:58.611845 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612210 kubelet[3045]: I0130 13:48:58.611868 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87f68da6677a21dd5371a77da831e677-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-95297e853e\" (UID: \"87f68da6677a21dd5371a77da831e677\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612210 kubelet[3045]: I0130 13:48:58.611886 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-k8s-certs\") pod 
\"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612210 kubelet[3045]: I0130 13:48:58.611906 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612422 kubelet[3045]: I0130 13:48:58.611926 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.612422 kubelet[3045]: I0130 13:48:58.611980 3045 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.615830 kubelet[3045]: I0130 13:48:58.615774 3045 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.619972 kubelet[3045]: E0130 13:48:58.618612 3045 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:58.752014 containerd[1785]: time="2025-01-30T13:48:58.751937971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-95297e853e,Uid:6d2b1a68abcd601ef7dae26e1662e1bc,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:58.752611 containerd[1785]: time="2025-01-30T13:48:58.752366283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-95297e853e,Uid:c6be35ed39095b90b4775fe241b24732,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:58.754166 containerd[1785]: time="2025-01-30T13:48:58.754135532Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-95297e853e,Uid:87f68da6677a21dd5371a77da831e677,Namespace:kube-system,Attempt:0,}" Jan 30 13:48:58.921235 kubelet[3045]: E0130 13:48:58.921117 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-95297e853e?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="800ms" Jan 30 13:48:59.021211 kubelet[3045]: I0130 13:48:59.021153 3045 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:59.021535 kubelet[3045]: E0130 13:48:59.021505 3045 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:59.360403 kubelet[3045]: W0130 13:48:59.360332 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list 
*v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.360403 kubelet[3045]: E0130 13:48:59.360404 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.41:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.370745 kubelet[3045]: W0130 13:48:59.370693 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.370872 kubelet[3045]: E0130 13:48:59.370762 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.41:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.394608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3145111450.mount: Deactivated successfully. Jan 30 13:48:59.423048 containerd[1785]: time="2025-01-30T13:48:59.423008418Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:59.425095 containerd[1785]: time="2025-01-30T13:48:59.425042074Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jan 30 13:48:59.428565 containerd[1785]: time="2025-01-30T13:48:59.428527570Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:59.431907 containerd[1785]: time="2025-01-30T13:48:59.431874063Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:59.438749 containerd[1785]: time="2025-01-30T13:48:59.438706351Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:59.442215 containerd[1785]: time="2025-01-30T13:48:59.442183647Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:59.444260 containerd[1785]: time="2025-01-30T13:48:59.444040799Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:48:59.449295 containerd[1785]: time="2025-01-30T13:48:59.449248143Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:48:59.450297 containerd[1785]: time="2025-01-30T13:48:59.450007264Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 697.551478ms" Jan 30 13:48:59.450974 containerd[1785]: time="2025-01-30T13:48:59.450931489Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 698.868315ms" Jan 30 13:48:59.459637 containerd[1785]: time="2025-01-30T13:48:59.459601029Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 705.408696ms" Jan 30 13:48:59.602901 kubelet[3045]: W0130 13:48:59.602821 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.602901 kubelet[3045]: E0130 13:48:59.602907 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.41:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.722349 kubelet[3045]: E0130 13:48:59.722293 3045 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.41:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-95297e853e?timeout=10s\": dial tcp 10.200.8.41:6443: connect: connection refused" interval="1.6s" Jan 30 13:48:59.823806 kubelet[3045]: I0130 13:48:59.823771 3045 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:59.824176 kubelet[3045]: E0130 13:48:59.824145 3045 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.41:6443/api/v1/nodes\": dial tcp 10.200.8.41:6443: connect: connection refused" node="ci-4081.3.0-a-95297e853e" Jan 30 13:48:59.891814 kubelet[3045]: W0130 13:48:59.891741 3045 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-95297e853e&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:48:59.891814 kubelet[3045]: E0130 13:48:59.891822 3045 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.41:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-95297e853e&limit=500&resourceVersion=0": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:49:00.131408 containerd[1785]: time="2025-01-30T13:49:00.131151888Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:00.131408 containerd[1785]: time="2025-01-30T13:49:00.131216090Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:00.131408 containerd[1785]: time="2025-01-30T13:49:00.131341394Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.132344 containerd[1785]: time="2025-01-30T13:49:00.131151788Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:00.132921 containerd[1785]: time="2025-01-30T13:49:00.132829035Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:00.133581 containerd[1785]: time="2025-01-30T13:49:00.133241746Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.134178 containerd[1785]: time="2025-01-30T13:49:00.134086470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.134178 containerd[1785]: time="2025-01-30T13:49:00.133812262Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:00.134178 containerd[1785]: time="2025-01-30T13:49:00.133875764Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:00.134178 containerd[1785]: time="2025-01-30T13:49:00.133896964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.134968 containerd[1785]: time="2025-01-30T13:49:00.134412179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.135231 containerd[1785]: time="2025-01-30T13:49:00.134608084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:00.262144 containerd[1785]: time="2025-01-30T13:49:00.262100507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-95297e853e,Uid:c6be35ed39095b90b4775fe241b24732,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0f6cdbf793b77f7846b3db1ec7e16031f02cb73d40300827ff2dd009e189f28\"" Jan 30 13:49:00.264421 containerd[1785]: time="2025-01-30T13:49:00.264357370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-95297e853e,Uid:6d2b1a68abcd601ef7dae26e1662e1bc,Namespace:kube-system,Attempt:0,} returns sandbox id \"dcea4289a891d4829cd0c869750f0c402e08aa3156ae069012b56f2853004238\"" Jan 30 13:49:00.272285 containerd[1785]: time="2025-01-30T13:49:00.271246860Z" level=info msg="CreateContainer within sandbox \"dcea4289a891d4829cd0c869750f0c402e08aa3156ae069012b56f2853004238\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:49:00.274451 containerd[1785]: time="2025-01-30T13:49:00.274419948Z" level=info msg="CreateContainer within sandbox \"c0f6cdbf793b77f7846b3db1ec7e16031f02cb73d40300827ff2dd009e189f28\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:49:00.276224 containerd[1785]: time="2025-01-30T13:49:00.275910889Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-95297e853e,Uid:87f68da6677a21dd5371a77da831e677,Namespace:kube-system,Attempt:0,} returns sandbox id \"3879b0822d86335f4ebcea8ab06302f2cdad0216c84f310a55f8c651f46daa21\"" Jan 30 13:49:00.279432 containerd[1785]: time="2025-01-30T13:49:00.279406986Z" level=info msg="CreateContainer within sandbox \"3879b0822d86335f4ebcea8ab06302f2cdad0216c84f310a55f8c651f46daa21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:49:00.337640 containerd[1785]: time="2025-01-30T13:49:00.337591594Z" level=info msg="CreateContainer within sandbox \"dcea4289a891d4829cd0c869750f0c402e08aa3156ae069012b56f2853004238\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f39d282e26d34e07ba0f176a5dfc053c78e926e41bfb7d7c5e8e2454b3065614\"" Jan 30 13:49:00.338298 containerd[1785]: time="2025-01-30T13:49:00.338267012Z" level=info msg="StartContainer for \"f39d282e26d34e07ba0f176a5dfc053c78e926e41bfb7d7c5e8e2454b3065614\"" Jan 30 13:49:00.348932 containerd[1785]: time="2025-01-30T13:49:00.348894406Z" level=info msg="CreateContainer within sandbox \"3879b0822d86335f4ebcea8ab06302f2cdad0216c84f310a55f8c651f46daa21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9c4c3fbf8ae264b44084929f7b55d40fe0bcb273dcd738044c27ff0fd52ca861\"" Jan 30 13:49:00.349846 containerd[1785]: time="2025-01-30T13:49:00.349821232Z" level=info msg="StartContainer for \"9c4c3fbf8ae264b44084929f7b55d40fe0bcb273dcd738044c27ff0fd52ca861\"" Jan 30 13:49:00.355864 containerd[1785]: time="2025-01-30T13:49:00.355836598Z" level=info msg="CreateContainer within sandbox \"c0f6cdbf793b77f7846b3db1ec7e16031f02cb73d40300827ff2dd009e189f28\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"652adcab5aac545aa55e8304976305ae1e86a45aa326b995e36f3ba1ec1b4225\"" Jan 30 13:49:00.357054 kubelet[3045]: E0130 13:49:00.357025 3045 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
"https://10.200.8.41:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.41:6443: connect: connection refused Jan 30 13:49:00.358731 containerd[1785]: time="2025-01-30T13:49:00.357507944Z" level=info msg="StartContainer for \"652adcab5aac545aa55e8304976305ae1e86a45aa326b995e36f3ba1ec1b4225\"" Jan 30 13:49:00.493472 containerd[1785]: time="2025-01-30T13:49:00.493419500Z" level=info msg="StartContainer for \"f39d282e26d34e07ba0f176a5dfc053c78e926e41bfb7d7c5e8e2454b3065614\" returns successfully" Jan 30 13:49:00.551830 containerd[1785]: time="2025-01-30T13:49:00.551407903Z" level=info msg="StartContainer for \"9c4c3fbf8ae264b44084929f7b55d40fe0bcb273dcd738044c27ff0fd52ca861\" returns successfully" Jan 30 13:49:00.551830 containerd[1785]: time="2025-01-30T13:49:00.551407903Z" level=info msg="StartContainer for \"652adcab5aac545aa55e8304976305ae1e86a45aa326b995e36f3ba1ec1b4225\" returns successfully" Jan 30 13:49:01.428216 kubelet[3045]: I0130 13:49:01.428180 3045 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:02.289074 kubelet[3045]: E0130 13:49:02.289013 3045 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.3.0-a-95297e853e\" not found" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:02.295555 kubelet[3045]: I0130 13:49:02.295505 3045 apiserver.go:52] "Watching apiserver" Jan 30 13:49:02.310898 kubelet[3045]: I0130 13:49:02.310858 3045 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:49:02.449865 kubelet[3045]: I0130 13:49:02.448520 3045 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:03.402272 kubelet[3045]: W0130 13:49:03.400680 3045 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:49:04.363343 systemd[1]: Reloading requested from client PID 3321 ('systemctl') (unit session-9.scope)... Jan 30 13:49:04.363363 systemd[1]: Reloading... Jan 30 13:49:04.445002 zram_generator::config[3357]: No configuration found. Jan 30 13:49:04.591787 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:49:04.674721 systemd[1]: Reloading finished in 310 ms. Jan 30 13:49:04.710800 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:04.711627 kubelet[3045]: E0130 13:49:04.711123 3045 event.go:319] "Unable to write event (broadcaster is shut down)" event="&Event{ObjectMeta:{ci-4081.3.0-a-95297e853e.181f7c8fcfbd5876 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-95297e853e,UID:ci-4081.3.0-a-95297e853e,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-95297e853e,},FirstTimestamp:2025-01-30 13:48:58.295761014 +0000 UTC m=+0.992365573,LastTimestamp:2025-01-30 13:48:58.295761014 +0000 UTC m=+0.992365573,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-95297e853e,}" Jan 30 13:49:04.731516 systemd[1]: kubelet.service: Deactivated successfully. 
Jan 30 13:49:04.732192 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:04.740610 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:49:04.932865 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:49:04.946355 (kubelet)[3438]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:49:05.383154 kubelet[3438]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:49:05.383154 kubelet[3438]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:49:05.383154 kubelet[3438]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:49:05.383154 kubelet[3438]: I0130 13:49:05.382981 3438 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:49:05.390530 kubelet[3438]: I0130 13:49:05.390488 3438 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:49:05.390530 kubelet[3438]: I0130 13:49:05.390523 3438 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:49:05.391047 kubelet[3438]: I0130 13:49:05.390776 3438 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:49:05.392323 kubelet[3438]: I0130 13:49:05.392297 3438 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:49:05.395144 kubelet[3438]: I0130 13:49:05.395000 3438 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:49:05.408292 kubelet[3438]: I0130 13:49:05.408258 3438 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:49:05.408823 kubelet[3438]: I0130 13:49:05.408779 3438 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:49:05.409419 kubelet[3438]: I0130 13:49:05.408826 3438 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-95297e853e","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:49:05.409419 kubelet[3438]: I0130 13:49:05.409126 3438 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:49:05.409419 kubelet[3438]: I0130 13:49:05.409141 3438 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:49:05.409419 kubelet[3438]: I0130 13:49:05.409193 3438 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:05.409419 kubelet[3438]: I0130 13:49:05.409329 3438 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:49:05.410417 kubelet[3438]: I0130 13:49:05.409353 3438 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:49:05.410417 kubelet[3438]: I0130 13:49:05.409824 3438 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:49:05.410417 kubelet[3438]: I0130 13:49:05.409848 3438 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:49:05.415077 kubelet[3438]: I0130 13:49:05.415056 3438 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:49:05.415277 kubelet[3438]: I0130 13:49:05.415262 3438 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:49:05.415738 kubelet[3438]: I0130 13:49:05.415720 3438 server.go:1264] "Started kubelet" Jan 30 13:49:05.421183 kubelet[3438]: I0130 13:49:05.419631 3438 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:49:05.429248 kubelet[3438]: I0130 13:49:05.429217 3438 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:49:05.432795 kubelet[3438]: I0130 13:49:05.432135 3438 server.go:455] "Adding 
debug handlers to kubelet server" Jan 30 13:49:05.434268 kubelet[3438]: I0130 13:49:05.434248 3438 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:49:05.435995 kubelet[3438]: I0130 13:49:05.435973 3438 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:49:05.438856 kubelet[3438]: I0130 13:49:05.432317 3438 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:49:05.438941 kubelet[3438]: I0130 13:49:05.438863 3438 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:49:05.439959 kubelet[3438]: I0130 13:49:05.439381 3438 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:49:05.447452 kubelet[3438]: I0130 13:49:05.447369 3438 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:49:05.447593 kubelet[3438]: I0130 13:49:05.447571 3438 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:49:05.452490 kubelet[3438]: I0130 13:49:05.452391 3438 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:49:05.454135 kubelet[3438]: E0130 13:49:05.453817 3438 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:49:05.479253 kubelet[3438]: I0130 13:49:05.479202 3438 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:49:05.483446 kubelet[3438]: I0130 13:49:05.483351 3438 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 30 13:49:05.483446 kubelet[3438]: I0130 13:49:05.483387 3438 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:49:05.483446 kubelet[3438]: I0130 13:49:05.483409 3438 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:49:05.483629 kubelet[3438]: E0130 13:49:05.483462 3438 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526397 3438 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526418 3438 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526441 3438 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526614 3438 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526626 3438 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:49:05.526765 kubelet[3438]: I0130 13:49:05.526649 3438 policy_none.go:49] "None policy: Start" Jan 30 13:49:05.528404 kubelet[3438]: I0130 13:49:05.527521 3438 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:49:05.528404 kubelet[3438]: I0130 13:49:05.527546 3438 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:49:05.528404 kubelet[3438]: I0130 13:49:05.527757 3438 state_mem.go:75] "Updated machine memory state" Jan 30 13:49:05.530298 kubelet[3438]: I0130 13:49:05.529051 3438 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:49:05.530298 kubelet[3438]: I0130 13:49:05.529267 3438 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:49:05.530298 kubelet[3438]: I0130 13:49:05.529378 3438 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:49:05.540050 kubelet[3438]: I0130 13:49:05.539760 3438 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.557672 kubelet[3438]: I0130 13:49:05.557610 3438 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.559962 kubelet[3438]: I0130 13:49:05.559750 3438 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.585911 kubelet[3438]: I0130 13:49:05.584506 3438 topology_manager.go:215] "Topology Admit Handler" podUID="6d2b1a68abcd601ef7dae26e1662e1bc" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.585911 kubelet[3438]: I0130 13:49:05.584606 3438 topology_manager.go:215] "Topology Admit Handler" podUID="87f68da6677a21dd5371a77da831e677" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.585911 kubelet[3438]: I0130 13:49:05.584677 3438 topology_manager.go:215] "Topology Admit Handler" podUID="c6be35ed39095b90b4775fe241b24732" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.594792 kubelet[3438]: W0130 13:49:05.594362 3438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:49:05.597521 kubelet[3438]: W0130 13:49:05.597501 3438 warnings.go:70] metadata.name: 
this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:49:05.598320 kubelet[3438]: W0130 13:49:05.598285 3438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:49:05.598393 kubelet[3438]: E0130 13:49:05.598342 3438 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-95297e853e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640326 kubelet[3438]: I0130 13:49:05.640069 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640326 kubelet[3438]: I0130 13:49:05.640168 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640326 kubelet[3438]: I0130 13:49:05.640200 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640326 kubelet[3438]: I0130 13:49:05.640258 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640326 kubelet[3438]: I0130 13:49:05.640283 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640609 kubelet[3438]: I0130 13:49:05.640412 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6d2b1a68abcd601ef7dae26e1662e1bc-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-95297e853e\" (UID: \"6d2b1a68abcd601ef7dae26e1662e1bc\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640609 kubelet[3438]: I0130 13:49:05.640449 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/87f68da6677a21dd5371a77da831e677-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-95297e853e\" (UID: \"87f68da6677a21dd5371a77da831e677\") " 
pod="kube-system/kube-scheduler-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640609 kubelet[3438]: I0130 13:49:05.640567 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:05.640609 kubelet[3438]: I0130 13:49:05.640600 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c6be35ed39095b90b4775fe241b24732-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-95297e853e\" (UID: \"c6be35ed39095b90b4775fe241b24732\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:06.416970 kubelet[3438]: I0130 13:49:06.413845 3438 apiserver.go:52] "Watching apiserver" Jan 30 13:49:06.436237 kubelet[3438]: I0130 13:49:06.436185 3438 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:49:06.531164 kubelet[3438]: W0130 13:49:06.531132 3438 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jan 30 13:49:06.532402 kubelet[3438]: E0130 13:49:06.531265 3438 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-95297e853e\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" Jan 30 13:49:06.645315 kubelet[3438]: I0130 13:49:06.645234 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.3.0-a-95297e853e" podStartSLOduration=3.645207838 podStartE2EDuration="3.645207838s" podCreationTimestamp="2025-01-30 13:49:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:06.598632533 +0000 UTC m=+1.648900675" watchObservedRunningTime="2025-01-30 13:49:06.645207838 +0000 UTC m=+1.695475880" Jan 30 13:49:06.674829 kubelet[3438]: I0130 13:49:06.674631 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-95297e853e" podStartSLOduration=1.6746049360000002 podStartE2EDuration="1.674604936s" podCreationTimestamp="2025-01-30 13:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:06.645605648 +0000 UTC m=+1.695873790" watchObservedRunningTime="2025-01-30 13:49:06.674604936 +0000 UTC m=+1.724873078" Jan 30 13:49:06.708349 kubelet[3438]: I0130 13:49:06.708261 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-95297e853e" podStartSLOduration=1.708234034 podStartE2EDuration="1.708234034s" podCreationTimestamp="2025-01-30 13:49:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:06.680577578 +0000 UTC m=+1.730845720" watchObservedRunningTime="2025-01-30 13:49:06.708234034 +0000 UTC m=+1.758502076" Jan 30 13:49:10.622186 sudo[2376]: pam_unix(sudo:session): session closed for user root Jan 30 13:49:10.731347 sshd[2372]: pam_unix(sshd:session): session closed for user 
core Jan 30 13:49:10.737083 systemd[1]: sshd@6-10.200.8.41:22-10.200.16.10:53284.service: Deactivated successfully. Jan 30 13:49:10.740277 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:49:10.740441 systemd-logind[1763]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:49:10.742468 systemd-logind[1763]: Removed session 9. Jan 30 13:49:19.416066 kubelet[3438]: I0130 13:49:19.415942 3438 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:49:19.416927 containerd[1785]: time="2025-01-30T13:49:19.416883907Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:49:19.417589 kubelet[3438]: I0130 13:49:19.417153 3438 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:49:19.865501 kubelet[3438]: I0130 13:49:19.865439 3438 topology_manager.go:215] "Topology Admit Handler" podUID="459e5864-2767-473a-b3c7-d6dd1a9359bf" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-4qqxq" Jan 30 13:49:19.931232 kubelet[3438]: I0130 13:49:19.931193 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/459e5864-2767-473a-b3c7-d6dd1a9359bf-var-lib-calico\") pod \"tigera-operator-7bc55997bb-4qqxq\" (UID: \"459e5864-2767-473a-b3c7-d6dd1a9359bf\") " pod="tigera-operator/tigera-operator-7bc55997bb-4qqxq" Jan 30 13:49:19.931232 kubelet[3438]: I0130 13:49:19.931233 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvfb\" (UniqueName: \"kubernetes.io/projected/459e5864-2767-473a-b3c7-d6dd1a9359bf-kube-api-access-hzvfb\") pod \"tigera-operator-7bc55997bb-4qqxq\" (UID: \"459e5864-2767-473a-b3c7-d6dd1a9359bf\") " pod="tigera-operator/tigera-operator-7bc55997bb-4qqxq" Jan 30 13:49:20.036413 kubelet[3438]: E0130 13:49:20.036355 3438 projected.go:294] Couldn't get configMap tigera-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 30 13:49:20.036413 kubelet[3438]: E0130 13:49:20.036403 3438 projected.go:200] Error preparing data for projected volume kube-api-access-hzvfb for pod tigera-operator/tigera-operator-7bc55997bb-4qqxq: configmap "kube-root-ca.crt" not found Jan 30 13:49:20.036655 kubelet[3438]: E0130 13:49:20.036491 3438 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/459e5864-2767-473a-b3c7-d6dd1a9359bf-kube-api-access-hzvfb podName:459e5864-2767-473a-b3c7-d6dd1a9359bf nodeName:}" failed. No retries permitted until 2025-01-30 13:49:20.536457841 +0000 UTC m=+15.586725983 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-hzvfb" (UniqueName: "kubernetes.io/projected/459e5864-2767-473a-b3c7-d6dd1a9359bf-kube-api-access-hzvfb") pod "tigera-operator-7bc55997bb-4qqxq" (UID: "459e5864-2767-473a-b3c7-d6dd1a9359bf") : configmap "kube-root-ca.crt" not found Jan 30 13:49:20.247979 kubelet[3438]: I0130 13:49:20.243865 3438 topology_manager.go:215] "Topology Admit Handler" podUID="514cde87-df2e-4534-b09d-d787bb54c2e9" podNamespace="kube-system" podName="kube-proxy-dx9g6" Jan 30 13:49:20.334404 kubelet[3438]: I0130 13:49:20.334361 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/514cde87-df2e-4534-b09d-d787bb54c2e9-kube-proxy\") pod \"kube-proxy-dx9g6\" (UID: \"514cde87-df2e-4534-b09d-d787bb54c2e9\") " pod="kube-system/kube-proxy-dx9g6" Jan 30 13:49:20.334589 kubelet[3438]: I0130 13:49:20.334422 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gtt7j\" (UniqueName: \"kubernetes.io/projected/514cde87-df2e-4534-b09d-d787bb54c2e9-kube-api-access-gtt7j\") pod \"kube-proxy-dx9g6\" (UID: \"514cde87-df2e-4534-b09d-d787bb54c2e9\") " pod="kube-system/kube-proxy-dx9g6" Jan 30 13:49:20.334589 kubelet[3438]: I0130 13:49:20.334471 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/514cde87-df2e-4534-b09d-d787bb54c2e9-xtables-lock\") pod \"kube-proxy-dx9g6\" (UID: \"514cde87-df2e-4534-b09d-d787bb54c2e9\") " pod="kube-system/kube-proxy-dx9g6" Jan 30 13:49:20.334589 kubelet[3438]: I0130 13:49:20.334494 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/514cde87-df2e-4534-b09d-d787bb54c2e9-lib-modules\") pod \"kube-proxy-dx9g6\" (UID: \"514cde87-df2e-4534-b09d-d787bb54c2e9\") " pod="kube-system/kube-proxy-dx9g6" Jan 30 13:49:20.553218 containerd[1785]: time="2025-01-30T13:49:20.553083943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dx9g6,Uid:514cde87-df2e-4534-b09d-d787bb54c2e9,Namespace:kube-system,Attempt:0,}" Jan 30 13:49:20.775983 containerd[1785]: time="2025-01-30T13:49:20.775924722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4qqxq,Uid:459e5864-2767-473a-b3c7-d6dd1a9359bf,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:49:21.505438 containerd[1785]: time="2025-01-30T13:49:21.505303855Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:21.505668 containerd[1785]: time="2025-01-30T13:49:21.505523160Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:21.505807 containerd[1785]: time="2025-01-30T13:49:21.505769266Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.507215 containerd[1785]: time="2025-01-30T13:49:21.507162501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.521274 containerd[1785]: time="2025-01-30T13:49:21.520990941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:21.521274 containerd[1785]: time="2025-01-30T13:49:21.521149245Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:21.521440 containerd[1785]: time="2025-01-30T13:49:21.521311949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.521665 containerd[1785]: time="2025-01-30T13:49:21.521614056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:21.572931 containerd[1785]: time="2025-01-30T13:49:21.572885517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dx9g6,Uid:514cde87-df2e-4534-b09d-d787bb54c2e9,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac7675c7344d2c1428da0e02c7dce803b9d8430b5f5726c756bb6fd00ca61247\"" Jan 30 13:49:21.576582 containerd[1785]: time="2025-01-30T13:49:21.576548007Z" level=info msg="CreateContainer within sandbox \"ac7675c7344d2c1428da0e02c7dce803b9d8430b5f5726c756bb6fd00ca61247\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:49:21.599508 containerd[1785]: time="2025-01-30T13:49:21.599345267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-4qqxq,Uid:459e5864-2767-473a-b3c7-d6dd1a9359bf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"e7e3eaa403d8f3c3f03bbbcd28025fc2327ff6556222af5c73f1d0a12aff7465\"" Jan 30 13:49:21.601450 containerd[1785]: time="2025-01-30T13:49:21.601318616Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:49:21.624515 containerd[1785]: time="2025-01-30T13:49:21.624475285Z" level=info msg="CreateContainer within sandbox \"ac7675c7344d2c1428da0e02c7dce803b9d8430b5f5726c756bb6fd00ca61247\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"63ef08e6d83fc0d91856a7ff88b26e4b98e5bc9743229290423d3d4e1b90814d\"" Jan 30 13:49:21.626382 containerd[1785]: time="2025-01-30T13:49:21.625126001Z" level=info msg="StartContainer for \"63ef08e6d83fc0d91856a7ff88b26e4b98e5bc9743229290423d3d4e1b90814d\"" Jan 30 13:49:21.681994 containerd[1785]: time="2025-01-30T13:49:21.681851996Z" level=info msg="StartContainer for \"63ef08e6d83fc0d91856a7ff88b26e4b98e5bc9743229290423d3d4e1b90814d\" returns successfully" Jan 30 13:49:22.558439 kubelet[3438]: I0130 13:49:22.558379 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dx9g6" podStartSLOduration=2.558353746 podStartE2EDuration="2.558353746s" podCreationTimestamp="2025-01-30 13:49:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:22.558169842 +0000 UTC m=+17.608437984" watchObservedRunningTime="2025-01-30 13:49:22.558353746 +0000 UTC m=+17.608621788" Jan 30 13:49:23.019978 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3551419385.mount: Deactivated successfully. 
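[Editor's aside] The earlier MountVolume.SetUp failure for "kube-api-access-hzvfb" concerns the projected service-account volume the API server injects into every pod; its ConfigMap source is exactly the kube-root-ca.crt object the log reports as missing until the root-CA publisher controller populates the new tigera-operator namespace. Below is a hedged reconstruction of that volume's shape using k8s.io/api/core/v1 types; the field values are upstream defaults, not values read from this node.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// kubeAPIAccessVolume rebuilds the projected volume shape behind the
// "kube-api-access-hzvfb" mount above. The ConfigMap projection is the
// piece that fails with `configmap "kube-root-ca.crt" not found` until
// the root-CA publisher creates that ConfigMap in the namespace.
func kubeAPIAccessVolume(name string) corev1.Volume {
	expiry := int64(3607) // upstream default token lifetime (assumed unchanged here)
	mode := int32(0644)
	return corev1.Volume{
		Name: name,
		VolumeSource: corev1.VolumeSource{
			Projected: &corev1.ProjectedVolumeSource{
				DefaultMode: &mode,
				Sources: []corev1.VolumeProjection{
					{ServiceAccountToken: &corev1.ServiceAccountTokenProjection{
						ExpirationSeconds: &expiry,
						Path:              "token",
					}},
					{ConfigMap: &corev1.ConfigMapProjection{
						LocalObjectReference: corev1.LocalObjectReference{Name: "kube-root-ca.crt"},
						Items:                []corev1.KeyToPath{{Key: "ca.crt", Path: "ca.crt"}},
					}},
					{DownwardAPI: &corev1.DownwardAPIProjection{
						Items: []corev1.DownwardAPIVolumeFile{{
							Path:     "namespace",
							FieldRef: &corev1.ObjectFieldSelector{FieldPath: "metadata.namespace"},
						}},
					}},
				},
			},
		},
	}
}

func main() {
	v := kubeAPIAccessVolume("kube-api-access-hzvfb") // name taken from the log
	fmt.Println(v.Name, len(v.Projected.Sources), "projection sources")
}
```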
Jan 30 13:49:24.911211 containerd[1785]: time="2025-01-30T13:49:24.911156160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.915914 containerd[1785]: time="2025-01-30T13:49:24.915853773Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:49:24.919319 containerd[1785]: time="2025-01-30T13:49:24.919219654Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.925646 containerd[1785]: time="2025-01-30T13:49:24.925610408Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:24.926861 containerd[1785]: time="2025-01-30T13:49:24.926360526Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 3.32500471s" Jan 30 13:49:24.926861 containerd[1785]: time="2025-01-30T13:49:24.926398127Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:49:24.928959 containerd[1785]: time="2025-01-30T13:49:24.928660781Z" level=info msg="CreateContainer within sandbox \"e7e3eaa403d8f3c3f03bbbcd28025fc2327ff6556222af5c73f1d0a12aff7465\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:49:24.958482 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3889431476.mount: Deactivated successfully. Jan 30 13:49:24.966636 containerd[1785]: time="2025-01-30T13:49:24.966589896Z" level=info msg="CreateContainer within sandbox \"e7e3eaa403d8f3c3f03bbbcd28025fc2327ff6556222af5c73f1d0a12aff7465\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"3e6d75ca661b63d6def570f477b5bc45819d489c231ef979d726c1e93c0a6b44\"" Jan 30 13:49:24.967597 containerd[1785]: time="2025-01-30T13:49:24.967219411Z" level=info msg="StartContainer for \"3e6d75ca661b63d6def570f477b5bc45819d489c231ef979d726c1e93c0a6b44\"" Jan 30 13:49:25.024243 containerd[1785]: time="2025-01-30T13:49:25.024132683Z" level=info msg="StartContainer for \"3e6d75ca661b63d6def570f477b5bc45819d489c231ef979d726c1e93c0a6b44\" returns successfully" Jan 30 13:49:25.555793 kubelet[3438]: I0130 13:49:25.555605 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-4qqxq" podStartSLOduration=3.228954842 podStartE2EDuration="6.555560191s" podCreationTimestamp="2025-01-30 13:49:19 +0000 UTC" firstStartedPulling="2025-01-30 13:49:21.600654099 +0000 UTC m=+16.650922141" lastFinishedPulling="2025-01-30 13:49:24.927259348 +0000 UTC m=+19.977527490" observedRunningTime="2025-01-30 13:49:25.555258484 +0000 UTC m=+20.605526626" watchObservedRunningTime="2025-01-30 13:49:25.555560191 +0000 UTC m=+20.605828233" Jan 30 13:49:25.950277 systemd[1]: run-containerd-runc-k8s.io-3e6d75ca661b63d6def570f477b5bc45819d489c231ef979d726c1e93c0a6b44-runc.RJO9vv.mount: Deactivated successfully. 
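[Editor's aside] The pod_startup_latency_tracker numbers in the tigera-operator entry above are internally consistent: podStartSLOduration is the end-to-end startup duration minus the image-pull time, computed on the monotonic (m=+) offsets printed in the log. A quick Go check of that arithmetic:

```go
// Sketch: reproducing the pod_startup_latency_tracker arithmetic from the
// tigera-operator entry above. SLO duration excludes image-pull time:
// SLO = E2E - (lastFinishedPulling - firstStartedPulling), using the
// monotonic m=+ offsets, which is what the subtraction operates on.
package main

import (
	"fmt"
	"time"
)

func main() {
	e2e := 6555560191 * time.Nanosecond        // podStartE2EDuration = 6.555560191s
	firstPull := 16650922141 * time.Nanosecond // firstStartedPulling  m=+16.650922141
	lastPull := 19977527490 * time.Nanosecond  // lastFinishedPulling  m=+19.977527490

	slo := e2e - (lastPull - firstPull)
	fmt.Println(slo.Seconds()) // 3.228954842, matching podStartSLOduration in the log
}
```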
Jan 30 13:49:28.095198 kubelet[3438]: I0130 13:49:28.095133 3438 topology_manager.go:215] "Topology Admit Handler" podUID="950808a5-0a24-4950-9d70-df93e5313caf" podNamespace="calico-system" podName="calico-typha-66568cf68-l88pv" Jan 30 13:49:28.179135 kubelet[3438]: I0130 13:49:28.178955 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtl5j\" (UniqueName: \"kubernetes.io/projected/950808a5-0a24-4950-9d70-df93e5313caf-kube-api-access-wtl5j\") pod \"calico-typha-66568cf68-l88pv\" (UID: \"950808a5-0a24-4950-9d70-df93e5313caf\") " pod="calico-system/calico-typha-66568cf68-l88pv" Jan 30 13:49:28.179135 kubelet[3438]: I0130 13:49:28.179021 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/950808a5-0a24-4950-9d70-df93e5313caf-typha-certs\") pod \"calico-typha-66568cf68-l88pv\" (UID: \"950808a5-0a24-4950-9d70-df93e5313caf\") " pod="calico-system/calico-typha-66568cf68-l88pv" Jan 30 13:49:28.179135 kubelet[3438]: I0130 13:49:28.179055 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/950808a5-0a24-4950-9d70-df93e5313caf-tigera-ca-bundle\") pod \"calico-typha-66568cf68-l88pv\" (UID: \"950808a5-0a24-4950-9d70-df93e5313caf\") " pod="calico-system/calico-typha-66568cf68-l88pv" Jan 30 13:49:28.407452 kubelet[3438]: I0130 13:49:28.407303 3438 topology_manager.go:215] "Topology Admit Handler" podUID="5c0c65bb-0726-430f-8020-b9a609a07a83" podNamespace="calico-system" podName="calico-node-zj249" Jan 30 13:49:28.412658 containerd[1785]: time="2025-01-30T13:49:28.412370549Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66568cf68-l88pv,Uid:950808a5-0a24-4950-9d70-df93e5313caf,Namespace:calico-system,Attempt:0,}" Jan 30 13:49:28.484069 kubelet[3438]: I0130 13:49:28.481628 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-lib-modules\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484069 kubelet[3438]: I0130 13:49:28.481681 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-cni-bin-dir\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484069 kubelet[3438]: I0130 13:49:28.481709 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-cni-log-dir\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484069 kubelet[3438]: I0130 13:49:28.481734 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-cni-net-dir\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484069 kubelet[3438]: I0130 13:49:28.481759 3438 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-policysync\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484505 kubelet[3438]: I0130 13:49:28.481783 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-var-lib-calico\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484505 kubelet[3438]: I0130 13:49:28.481810 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5c0c65bb-0726-430f-8020-b9a609a07a83-tigera-ca-bundle\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484505 kubelet[3438]: I0130 13:49:28.481832 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-xtables-lock\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484505 kubelet[3438]: I0130 13:49:28.481854 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-var-run-calico\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.484505 kubelet[3438]: I0130 13:49:28.481891 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5c0c65bb-0726-430f-8020-b9a609a07a83-flexvol-driver-host\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.485240 kubelet[3438]: I0130 13:49:28.481932 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mprm7\" (UniqueName: \"kubernetes.io/projected/5c0c65bb-0726-430f-8020-b9a609a07a83-kube-api-access-mprm7\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.485240 kubelet[3438]: I0130 13:49:28.482144 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5c0c65bb-0726-430f-8020-b9a609a07a83-node-certs\") pod \"calico-node-zj249\" (UID: \"5c0c65bb-0726-430f-8020-b9a609a07a83\") " pod="calico-system/calico-node-zj249" Jan 30 13:49:28.488966 containerd[1785]: time="2025-01-30T13:49:28.488047173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:28.488966 containerd[1785]: time="2025-01-30T13:49:28.488128875Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:28.488966 containerd[1785]: time="2025-01-30T13:49:28.488145875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:28.488966 containerd[1785]: time="2025-01-30T13:49:28.488284578Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:28.529260 kubelet[3438]: I0130 13:49:28.529220 3438 topology_manager.go:215] "Topology Admit Handler" podUID="47432003-ec2a-4f52-b92d-2b12925f250f" podNamespace="calico-system" podName="csi-node-driver-pbr7p" Jan 30 13:49:28.532996 kubelet[3438]: E0130 13:49:28.531098 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f" Jan 30 13:49:28.583977 kubelet[3438]: I0130 13:49:28.583000 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/47432003-ec2a-4f52-b92d-2b12925f250f-registration-dir\") pod \"csi-node-driver-pbr7p\" (UID: \"47432003-ec2a-4f52-b92d-2b12925f250f\") " pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:28.583977 kubelet[3438]: I0130 13:49:28.583033 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bsc2l\" (UniqueName: \"kubernetes.io/projected/47432003-ec2a-4f52-b92d-2b12925f250f-kube-api-access-bsc2l\") pod \"csi-node-driver-pbr7p\" (UID: \"47432003-ec2a-4f52-b92d-2b12925f250f\") " pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:28.583977 kubelet[3438]: I0130 13:49:28.583091 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/47432003-ec2a-4f52-b92d-2b12925f250f-kubelet-dir\") pod \"csi-node-driver-pbr7p\" (UID: \"47432003-ec2a-4f52-b92d-2b12925f250f\") " pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:28.583977 kubelet[3438]: I0130 13:49:28.583131 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/47432003-ec2a-4f52-b92d-2b12925f250f-varrun\") pod \"csi-node-driver-pbr7p\" (UID: \"47432003-ec2a-4f52-b92d-2b12925f250f\") " pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:28.583977 kubelet[3438]: I0130 13:49:28.583145 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/47432003-ec2a-4f52-b92d-2b12925f250f-socket-dir\") pod \"csi-node-driver-pbr7p\" (UID: \"47432003-ec2a-4f52-b92d-2b12925f250f\") " pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:28.587449 kubelet[3438]: E0130 13:49:28.586816 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.587449 kubelet[3438]: W0130 13:49:28.586844 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.587449 kubelet[3438]: E0130 13:49:28.586869 3438 
plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.587981 kubelet[3438]: E0130 13:49:28.587408 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.587981 kubelet[3438]: W0130 13:49:28.587653 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.587981 kubelet[3438]: E0130 13:49:28.587677 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.591424 kubelet[3438]: E0130 13:49:28.591394 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.592374 kubelet[3438]: W0130 13:49:28.591926 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.592374 kubelet[3438]: E0130 13:49:28.591970 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.592374 kubelet[3438]: E0130 13:49:28.592256 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.592374 kubelet[3438]: W0130 13:49:28.592266 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.592374 kubelet[3438]: E0130 13:49:28.592316 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.592630 kubelet[3438]: E0130 13:49:28.592599 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.592630 kubelet[3438]: W0130 13:49:28.592612 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.592718 kubelet[3438]: E0130 13:49:28.592673 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.593218 kubelet[3438]: E0130 13:49:28.592878 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.593218 kubelet[3438]: W0130 13:49:28.592890 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.593218 kubelet[3438]: E0130 13:49:28.593167 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.593218 kubelet[3438]: W0130 13:49:28.593176 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.593218 kubelet[3438]: E0130 13:49:28.593188 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.593218 kubelet[3438]: E0130 13:49:28.593202 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.593502 kubelet[3438]: E0130 13:49:28.593420 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.593502 kubelet[3438]: W0130 13:49:28.593432 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.594030 kubelet[3438]: E0130 13:49:28.593995 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.594968 kubelet[3438]: E0130 13:49:28.594725 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.595097 kubelet[3438]: W0130 13:49:28.594739 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.595258 kubelet[3438]: E0130 13:49:28.595125 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.596970 kubelet[3438]: E0130 13:49:28.596090 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.596970 kubelet[3438]: W0130 13:49:28.596106 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.596970 kubelet[3438]: E0130 13:49:28.596298 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.597142 kubelet[3438]: E0130 13:49:28.597065 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.597142 kubelet[3438]: W0130 13:49:28.597078 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.597227 kubelet[3438]: E0130 13:49:28.597196 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.598136 kubelet[3438]: E0130 13:49:28.598117 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.598136 kubelet[3438]: W0130 13:49:28.598136 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.598404 kubelet[3438]: E0130 13:49:28.598383 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.599587 kubelet[3438]: E0130 13:49:28.599302 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.599587 kubelet[3438]: W0130 13:49:28.599327 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.600929 kubelet[3438]: E0130 13:49:28.600012 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.600929 kubelet[3438]: W0130 13:49:28.600025 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.601906 kubelet[3438]: E0130 13:49:28.601463 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.601906 kubelet[3438]: W0130 13:49:28.601478 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.601906 kubelet[3438]: E0130 13:49:28.601492 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.601906 kubelet[3438]: E0130 13:49:28.601512 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.604605 kubelet[3438]: E0130 13:49:28.602723 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.604605 kubelet[3438]: W0130 13:49:28.602739 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.604605 kubelet[3438]: E0130 13:49:28.602755 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.606120 kubelet[3438]: E0130 13:49:28.606104 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.606372 kubelet[3438]: W0130 13:49:28.606256 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.606460 kubelet[3438]: E0130 13:49:28.606448 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.606551 kubelet[3438]: E0130 13:49:28.606540 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.617089 kubelet[3438]: E0130 13:49:28.615917 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.617089 kubelet[3438]: W0130 13:49:28.615933 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.617089 kubelet[3438]: E0130 13:49:28.615959 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.630931 containerd[1785]: time="2025-01-30T13:49:28.630894416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-66568cf68-l88pv,Uid:950808a5-0a24-4950-9d70-df93e5313caf,Namespace:calico-system,Attempt:0,} returns sandbox id \"76d07c125d5b17d4932481d2a9437e9dee4493a31d8d327b621d9460881b1408\"" Jan 30 13:49:28.633180 containerd[1785]: time="2025-01-30T13:49:28.633027167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:49:28.684025 kubelet[3438]: E0130 13:49:28.683832 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.684025 kubelet[3438]: W0130 13:49:28.683859 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.684025 kubelet[3438]: E0130 13:49:28.683882 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.686292 kubelet[3438]: E0130 13:49:28.684172 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.686292 kubelet[3438]: W0130 13:49:28.684188 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.686292 kubelet[3438]: E0130 13:49:28.684205 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.686740 kubelet[3438]: E0130 13:49:28.686499 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.686740 kubelet[3438]: W0130 13:49:28.686532 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.686740 kubelet[3438]: E0130 13:49:28.686554 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.687344 kubelet[3438]: E0130 13:49:28.687177 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.687344 kubelet[3438]: W0130 13:49:28.687192 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.687344 kubelet[3438]: E0130 13:49:28.687289 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.687777 kubelet[3438]: E0130 13:49:28.687672 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.687777 kubelet[3438]: W0130 13:49:28.687685 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.688025 kubelet[3438]: E0130 13:49:28.687898 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.688272 kubelet[3438]: E0130 13:49:28.688152 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.688272 kubelet[3438]: W0130 13:49:28.688165 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.688527 kubelet[3438]: E0130 13:49:28.688404 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.688726 kubelet[3438]: E0130 13:49:28.688645 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.688726 kubelet[3438]: W0130 13:49:28.688658 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.689032 kubelet[3438]: E0130 13:49:28.688763 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.689293 kubelet[3438]: E0130 13:49:28.689190 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.689293 kubelet[3438]: W0130 13:49:28.689204 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.689567 kubelet[3438]: E0130 13:49:28.689416 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.689713 kubelet[3438]: E0130 13:49:28.689669 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.689713 kubelet[3438]: W0130 13:49:28.689680 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.689980 kubelet[3438]: E0130 13:49:28.689821 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.690239 kubelet[3438]: E0130 13:49:28.690144 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.690239 kubelet[3438]: W0130 13:49:28.690156 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.690461 kubelet[3438]: E0130 13:49:28.690360 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.690603 kubelet[3438]: E0130 13:49:28.690558 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.690603 kubelet[3438]: W0130 13:49:28.690569 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.691133 kubelet[3438]: E0130 13:49:28.690727 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.691365 kubelet[3438]: E0130 13:49:28.691284 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.691365 kubelet[3438]: W0130 13:49:28.691297 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.691699 kubelet[3438]: E0130 13:49:28.691587 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.692102 kubelet[3438]: E0130 13:49:28.691986 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.692102 kubelet[3438]: W0130 13:49:28.692002 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.692314 kubelet[3438]: E0130 13:49:28.692252 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.692546 kubelet[3438]: E0130 13:49:28.692448 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.692816 kubelet[3438]: W0130 13:49:28.692565 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.693012 kubelet[3438]: E0130 13:49:28.692906 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.693157 kubelet[3438]: E0130 13:49:28.693000 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.693157 kubelet[3438]: W0130 13:49:28.693100 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.693327 kubelet[3438]: E0130 13:49:28.693186 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.693726 kubelet[3438]: E0130 13:49:28.693571 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.693726 kubelet[3438]: W0130 13:49:28.693585 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.693726 kubelet[3438]: E0130 13:49:28.693672 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.694028 kubelet[3438]: E0130 13:49:28.693984 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.694028 kubelet[3438]: W0130 13:49:28.693996 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.694131 kubelet[3438]: E0130 13:49:28.694090 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.694546 kubelet[3438]: E0130 13:49:28.694377 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.694546 kubelet[3438]: W0130 13:49:28.694399 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.694546 kubelet[3438]: E0130 13:49:28.694496 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.695129 kubelet[3438]: E0130 13:49:28.694855 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.695129 kubelet[3438]: W0130 13:49:28.694867 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.695129 kubelet[3438]: E0130 13:49:28.695047 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.695261 kubelet[3438]: E0130 13:49:28.695227 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.695261 kubelet[3438]: W0130 13:49:28.695238 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.695362 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.695543 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.696971 kubelet[3438]: W0130 13:49:28.695555 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.695597 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.696167 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.696971 kubelet[3438]: W0130 13:49:28.696179 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.696293 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.696493 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.696971 kubelet[3438]: W0130 13:49:28.696505 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.696971 kubelet[3438]: E0130 13:49:28.696617 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.697423 kubelet[3438]: E0130 13:49:28.697043 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.697423 kubelet[3438]: W0130 13:49:28.697055 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.697423 kubelet[3438]: E0130 13:49:28.697074 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.697423 kubelet[3438]: E0130 13:49:28.697377 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.697423 kubelet[3438]: W0130 13:49:28.697389 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.697423 kubelet[3438]: E0130 13:49:28.697402 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:28.706401 kubelet[3438]: E0130 13:49:28.706386 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:28.706563 kubelet[3438]: W0130 13:49:28.706452 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:28.706563 kubelet[3438]: E0130 13:49:28.706469 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:28.717749 containerd[1785]: time="2025-01-30T13:49:28.717336299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zj249,Uid:5c0c65bb-0726-430f-8020-b9a609a07a83,Namespace:calico-system,Attempt:0,}" Jan 30 13:49:28.782335 containerd[1785]: time="2025-01-30T13:49:28.781684950Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:28.782335 containerd[1785]: time="2025-01-30T13:49:28.782362866Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:28.784288 containerd[1785]: time="2025-01-30T13:49:28.783066883Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:28.784288 containerd[1785]: time="2025-01-30T13:49:28.783557895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:28.840555 containerd[1785]: time="2025-01-30T13:49:28.840481967Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zj249,Uid:5c0c65bb-0726-430f-8020-b9a609a07a83,Namespace:calico-system,Attempt:0,} returns sandbox id \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\"" Jan 30 13:49:30.171005 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4129570420.mount: Deactivated successfully. Jan 30 13:49:30.484228 kubelet[3438]: E0130 13:49:30.484161 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f" Jan 30 13:49:31.229029 containerd[1785]: time="2025-01-30T13:49:31.228975499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:31.232134 containerd[1785]: time="2025-01-30T13:49:31.232078273Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Jan 30 13:49:31.237669 containerd[1785]: time="2025-01-30T13:49:31.237612706Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:31.244079 containerd[1785]: time="2025-01-30T13:49:31.244025359Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:31.245130 containerd[1785]: time="2025-01-30T13:49:31.244640674Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.611554405s" Jan 30 13:49:31.245130 containerd[1785]: time="2025-01-30T13:49:31.244682675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:49:31.247884 containerd[1785]: time="2025-01-30T13:49:31.247624545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:49:31.254087 containerd[1785]: time="2025-01-30T13:49:31.254059499Z" level=info msg="CreateContainer within sandbox \"76d07c125d5b17d4932481d2a9437e9dee4493a31d8d327b621d9460881b1408\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:49:31.303366 containerd[1785]: time="2025-01-30T13:49:31.303313275Z" level=info msg="CreateContainer within sandbox \"76d07c125d5b17d4932481d2a9437e9dee4493a31d8d327b621d9460881b1408\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c5176efc7576288ea73031c4532337551c2ad29d93e530ee100cc67ae6bd7779\"" Jan 30 13:49:31.305003 containerd[1785]: time="2025-01-30T13:49:31.303812187Z" level=info msg="StartContainer for \"c5176efc7576288ea73031c4532337551c2ad29d93e530ee100cc67ae6bd7779\"" Jan 30 13:49:31.384272 containerd[1785]: time="2025-01-30T13:49:31.384223408Z" level=info msg="StartContainer for \"c5176efc7576288ea73031c4532337551c2ad29d93e530ee100cc67ae6bd7779\" returns successfully" Jan 30 13:49:31.592789 kubelet[3438]: I0130 13:49:31.591649 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-66568cf68-l88pv" podStartSLOduration=0.978271015 podStartE2EDuration="3.591625764s" podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:28.63233695 +0000 UTC m=+23.682604992" lastFinishedPulling="2025-01-30 13:49:31.245691699 +0000 UTC m=+26.295959741" observedRunningTime="2025-01-30 13:49:31.591170053 +0000 UTC m=+26.641438195" watchObservedRunningTime="2025-01-30 13:49:31.591625764 +0000 UTC m=+26.641893806" Jan 30 13:49:31.602051 kubelet[3438]: E0130 13:49:31.602021 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:31.602051 kubelet[3438]: W0130 13:49:31.602052 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:31.602370 kubelet[3438]: E0130 13:49:31.602077 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:31.602444 kubelet[3438]: E0130 13:49:31.602388 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:31.602444 kubelet[3438]: W0130 13:49:31.602402 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:31.602444 kubelet[3438]: E0130 13:49:31.602421 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Jan 30 13:49:32.253328 systemd[1]: run-containerd-runc-k8s.io-c5176efc7576288ea73031c4532337551c2ad29d93e530ee100cc67ae6bd7779-runc.gjO1M6.mount: Deactivated successfully.
Jan 30 13:49:32.484513 kubelet[3438]: E0130 13:49:32.484190 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f"
Jan 30 13:49:32.561822 kubelet[3438]: I0130 13:49:32.561700 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 30 13:49:32.613982 kubelet[3438]: E0130 13:49:32.612249 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:49:32.613982 kubelet[3438]: W0130 13:49:32.612279 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:49:32.613982 kubelet[3438]: E0130 13:49:32.612309 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 30 13:49:32.615847 kubelet[3438]: E0130 13:49:32.615513 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 30 13:49:32.615847 kubelet[3438]: W0130 13:49:32.615524 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 30 13:49:32.615847 kubelet[3438]: E0130 13:49:32.615535 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:32.615847 kubelet[3438]: E0130 13:49:32.615719 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.617595 kubelet[3438]: W0130 13:49:32.615727 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.615738 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.615935 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.617595 kubelet[3438]: W0130 13:49:32.615970 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.615984 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.616287 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.617595 kubelet[3438]: W0130 13:49:32.616298 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.616308 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.617595 kubelet[3438]: E0130 13:49:32.616550 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.617595 kubelet[3438]: W0130 13:49:32.616558 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.616577 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.616791 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.618040 kubelet[3438]: W0130 13:49:32.616799 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.616817 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.617051 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.618040 kubelet[3438]: W0130 13:49:32.617061 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.617081 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.617276 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.618040 kubelet[3438]: W0130 13:49:32.617286 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.618040 kubelet[3438]: E0130 13:49:32.617308 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.617506 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619230 kubelet[3438]: W0130 13:49:32.617516 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.617543 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.617775 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619230 kubelet[3438]: W0130 13:49:32.617785 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.617884 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.618278 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619230 kubelet[3438]: W0130 13:49:32.618288 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.618367 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:32.619230 kubelet[3438]: E0130 13:49:32.618481 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619673 kubelet[3438]: W0130 13:49:32.618490 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.618556 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.618675 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619673 kubelet[3438]: W0130 13:49:32.618681 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.618702 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.618887 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619673 kubelet[3438]: W0130 13:49:32.618895 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.618911 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.619673 kubelet[3438]: E0130 13:49:32.619245 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.619673 kubelet[3438]: W0130 13:49:32.619256 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.619281 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.619498 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621360 kubelet[3438]: W0130 13:49:32.619508 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.619532 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.619886 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621360 kubelet[3438]: W0130 13:49:32.619896 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.619911 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.620105 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621360 kubelet[3438]: W0130 13:49:32.620113 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621360 kubelet[3438]: E0130 13:49:32.620134 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.620328 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621749 kubelet[3438]: W0130 13:49:32.620336 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.620358 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.620897 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621749 kubelet[3438]: W0130 13:49:32.620908 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.620922 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.621119 3438 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:49:32.621749 kubelet[3438]: W0130 13:49:32.621127 3438 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:49:32.621749 kubelet[3438]: E0130 13:49:32.621138 3438 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:49:32.773763 containerd[1785]: time="2025-01-30T13:49:32.773713006Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:32.777785 containerd[1785]: time="2025-01-30T13:49:32.777703702Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Jan 30 13:49:32.788729 containerd[1785]: time="2025-01-30T13:49:32.788665864Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:32.796008 containerd[1785]: time="2025-01-30T13:49:32.795667731Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:32.797326 containerd[1785]: time="2025-01-30T13:49:32.796921661Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.549260416s" Jan 30 13:49:32.797326 containerd[1785]: time="2025-01-30T13:49:32.796987662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:49:32.800190 containerd[1785]: time="2025-01-30T13:49:32.800135638Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:49:32.863918 containerd[1785]: time="2025-01-30T13:49:32.863807059Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80\"" Jan 30 13:49:32.865696 containerd[1785]: time="2025-01-30T13:49:32.864313871Z" level=info msg="StartContainer for \"44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80\"" Jan 30 13:49:32.932168 containerd[1785]: time="2025-01-30T13:49:32.931825084Z" level=info msg="StartContainer for \"44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80\" returns successfully" Jan 30 13:49:33.253411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80-rootfs.mount: Deactivated successfully. 
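Note: the driver-call.go/plugins.go storm above is the kubelet's dynamic FlexVolume prober at work. It keeps re-scanning /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, executing each driver binary it finds with the single argument init and unmarshaling stdout as JSON. The nodeagent~uds directory already exists, but the uds binary inside it does not yet, so the exec fails, stdout stays empty, and the JSON decode reports "unexpected end of JSON input". The flexvol-driver container started above (Calico's pod2daemon-flexvol image) is what installs that binary, which is why the storm dies out shortly afterwards. A minimal sketch of the call contract the prober expects, assuming only the generic FlexVolume protocol (this is not Calico's actual uds source):

    // flexvol_stub.go - sketch of the FlexVolume driver-call contract exercised
    // in the log above. The install path and driver name come from the log; the
    // JSON shape follows the FlexVolume spec ("init" must answer with a status
    // and an attach capability).
    package main

    import (
        "encoding/json"
        "fmt"
        "os"
    )

    // driverStatus mirrors what kubelet's driver-call.go unmarshals from the
    // driver's stdout; an empty stdout is exactly what produces
    // "unexpected end of JSON input".
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    func main() {
        if len(os.Args) < 2 {
            os.Exit(1)
        }
        switch os.Args[1] {
        case "init":
            // kubelet probes each plugin directory by running "<driver> init"
            // and parsing this reply.
            out, _ := json.Marshal(driverStatus{
                Status:       "Success",
                Capabilities: map[string]bool{"attach": false},
            })
            fmt.Println(string(out))
        default:
            out, _ := json.Marshal(driverStatus{Status: "Not supported"})
            fmt.Println(string(out))
        }
    }

Dropped into /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/ as uds, a binary of this shape would answer the init probe and silence the errors; Calico's real driver additionally implements the volume mount calls, which this sketch does not attempt.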
Jan 30 13:49:34.261627 containerd[1785]: time="2025-01-30T13:49:34.261540754Z" level=info msg="shim disconnected" id=44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80 namespace=k8s.io
Jan 30 13:49:34.261627 containerd[1785]: time="2025-01-30T13:49:34.261619556Z" level=warning msg="cleaning up after shim disconnected" id=44d93aa143713905798cd92589d3bd9fbb64d985259c1256eb81141734127f80 namespace=k8s.io
Jan 30 13:49:34.261627 containerd[1785]: time="2025-01-30T13:49:34.261631456Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:49:34.484628 kubelet[3438]: E0130 13:49:34.484563 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f"
Jan 30 13:49:34.571340 containerd[1785]: time="2025-01-30T13:49:34.571176952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 30 13:49:36.484329 kubelet[3438]: E0130 13:49:36.484190 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f"
Jan 30 13:49:38.484267 kubelet[3438]: E0130 13:49:38.484215 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f"
Jan 30 13:49:39.935518 containerd[1785]: time="2025-01-30T13:49:39.935466794Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:49:39.938183 containerd[1785]: time="2025-01-30T13:49:39.938114959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 30 13:49:39.944070 containerd[1785]: time="2025-01-30T13:49:39.944018102Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:49:39.948811 containerd[1785]: time="2025-01-30T13:49:39.948760817Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 30 13:49:39.949576 containerd[1785]: time="2025-01-30T13:49:39.949432833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.37819638s"
Jan 30 13:49:39.949576 containerd[1785]: time="2025-01-30T13:49:39.949470634Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 30 13:49:39.952205 containerd[1785]: time="2025-01-30T13:49:39.952090197Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 30 13:49:39.998926 containerd[1785]: time="2025-01-30T13:49:39.998875032Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4\""
Jan 30 13:49:40.000236 containerd[1785]: time="2025-01-30T13:49:39.999514547Z" level=info msg="StartContainer for \"83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4\""
Jan 30 13:49:40.063606 containerd[1785]: time="2025-01-30T13:49:40.062879284Z" level=info msg="StartContainer for \"83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4\" returns successfully"
Jan 30 13:49:40.485486 kubelet[3438]: E0130 13:49:40.485413 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f"
Jan 30 13:49:41.475648 kubelet[3438]: I0130 13:49:41.475608 3438 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 30 13:49:41.501425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4-rootfs.mount: Deactivated successfully.
Jan 30 13:49:41.520938 kubelet[3438]: I0130 13:49:41.519567 3438 topology_manager.go:215] "Topology Admit Handler" podUID="4977d573-54f2-415d-abc0-e669e05a5801" podNamespace="kube-system" podName="coredns-7db6d8ff4d-6fc58"
Jan 30 13:49:41.528968 kubelet[3438]: I0130 13:49:41.528343 3438 topology_manager.go:215] "Topology Admit Handler" podUID="c8a9341d-4cd7-4b79-b7a9-9f342499286e" podNamespace="kube-system" podName="coredns-7db6d8ff4d-j46xv"
Jan 30 13:49:41.528968 kubelet[3438]: I0130 13:49:41.528584 3438 topology_manager.go:215] "Topology Admit Handler" podUID="3ddfe995-7c01-4e32-8dca-763b123eb964" podNamespace="calico-system" podName="calico-kube-controllers-5d9586bbdc-85zhb"
Jan 30 13:49:41.534444 kubelet[3438]: I0130 13:49:41.534422 3438 topology_manager.go:215] "Topology Admit Handler" podUID="3c02d11f-d543-425f-a360-1b4417202889" podNamespace="calico-apiserver" podName="calico-apiserver-5d7b4b4c89-d7hdd"
Jan 30 13:49:41.534716 kubelet[3438]: I0130 13:49:41.534698 3438 topology_manager.go:215] "Topology Admit Handler" podUID="385f3a3c-e141-44cf-93e1-7d18099ed6fc" podNamespace="calico-apiserver" podName="calico-apiserver-5d7b4b4c89-vq9jp"
Jan 30 13:49:41.679795 kubelet[3438]: I0130 13:49:41.679450 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8a9341d-4cd7-4b79-b7a9-9f342499286e-config-volume\") pod \"coredns-7db6d8ff4d-j46xv\" (UID: \"c8a9341d-4cd7-4b79-b7a9-9f342499286e\") " pod="kube-system/coredns-7db6d8ff4d-j46xv"
Jan 30 13:49:41.679795 kubelet[3438]: I0130 13:49:41.679511 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3c02d11f-d543-425f-a360-1b4417202889-calico-apiserver-certs\") pod \"calico-apiserver-5d7b4b4c89-d7hdd\" (UID: \"3c02d11f-d543-425f-a360-1b4417202889\") " pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd"
Jan 30 13:49:41.679795 kubelet[3438]: I0130 13:49:41.679547 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7v8k2\" (UniqueName: \"kubernetes.io/projected/3c02d11f-d543-425f-a360-1b4417202889-kube-api-access-7v8k2\") pod \"calico-apiserver-5d7b4b4c89-d7hdd\" (UID: \"3c02d11f-d543-425f-a360-1b4417202889\") " pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd"
Jan 30 13:49:41.679795 kubelet[3438]: I0130 13:49:41.679579 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3ddfe995-7c01-4e32-8dca-763b123eb964-tigera-ca-bundle\") pod \"calico-kube-controllers-5d9586bbdc-85zhb\" (UID: \"3ddfe995-7c01-4e32-8dca-763b123eb964\") " pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb"
Jan 30 13:49:41.679795 kubelet[3438]: I0130 13:49:41.679645 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwr5b\" (UniqueName: \"kubernetes.io/projected/c8a9341d-4cd7-4b79-b7a9-9f342499286e-kube-api-access-dwr5b\") pod \"coredns-7db6d8ff4d-j46xv\" (UID: \"c8a9341d-4cd7-4b79-b7a9-9f342499286e\") " pod="kube-system/coredns-7db6d8ff4d-j46xv"
Jan 30 13:49:41.680286 kubelet[3438]: I0130 13:49:41.679694 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4977d573-54f2-415d-abc0-e669e05a5801-config-volume\") pod \"coredns-7db6d8ff4d-6fc58\" (UID: \"4977d573-54f2-415d-abc0-e669e05a5801\") " pod="kube-system/coredns-7db6d8ff4d-6fc58"
Jan 30 13:49:41.680286 kubelet[3438]: I0130 13:49:41.679725 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8cwg\" (UniqueName: \"kubernetes.io/projected/385f3a3c-e141-44cf-93e1-7d18099ed6fc-kube-api-access-x8cwg\") pod \"calico-apiserver-5d7b4b4c89-vq9jp\" (UID: \"385f3a3c-e141-44cf-93e1-7d18099ed6fc\") " pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp"
Jan 30 13:49:41.680286 kubelet[3438]: I0130 13:49:41.679751 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpmqp\" (UniqueName: \"kubernetes.io/projected/3ddfe995-7c01-4e32-8dca-763b123eb964-kube-api-access-hpmqp\") pod \"calico-kube-controllers-5d9586bbdc-85zhb\" (UID: \"3ddfe995-7c01-4e32-8dca-763b123eb964\") " pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb"
Jan 30 13:49:41.680286 kubelet[3438]: I0130 13:49:41.679788 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gcnr\" (UniqueName: \"kubernetes.io/projected/4977d573-54f2-415d-abc0-e669e05a5801-kube-api-access-2gcnr\") pod \"coredns-7db6d8ff4d-6fc58\" (UID: \"4977d573-54f2-415d-abc0-e669e05a5801\") " pod="kube-system/coredns-7db6d8ff4d-6fc58"
Jan 30 13:49:41.680286 kubelet[3438]: I0130 13:49:41.679818 3438 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/385f3a3c-e141-44cf-93e1-7d18099ed6fc-calico-apiserver-certs\") pod \"calico-apiserver-5d7b4b4c89-vq9jp\" (UID: \"385f3a3c-e141-44cf-93e1-7d18099ed6fc\") " pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp"
Jan 30 13:49:41.833737 containerd[1785]: time="2025-01-30T13:49:41.833585017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fc58,Uid:4977d573-54f2-415d-abc0-e669e05a5801,Namespace:kube-system,Attempt:0,}"
Jan 30 13:49:41.840396 containerd[1785]: time="2025-01-30T13:49:41.840349181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9586bbdc-85zhb,Uid:3ddfe995-7c01-4e32-8dca-763b123eb964,Namespace:calico-system,Attempt:0,}"
Jan 30 13:49:41.844013 containerd[1785]: time="2025-01-30T13:49:41.843983969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-vq9jp,Uid:385f3a3c-e141-44cf-93e1-7d18099ed6fc,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:49:41.848497 containerd[1785]: time="2025-01-30T13:49:41.848466878Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-d7hdd,Uid:3c02d11f-d543-425f-a360-1b4417202889,Namespace:calico-apiserver,Attempt:0,}"
Jan 30 13:49:41.853002 containerd[1785]: time="2025-01-30T13:49:41.852975987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46xv,Uid:c8a9341d-4cd7-4b79-b7a9-9f342499286e,Namespace:kube-system,Attempt:0,}"
Jan 30 13:49:43.127971 containerd[1785]: time="2025-01-30T13:49:43.127663594Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbr7p,Uid:47432003-ec2a-4f52-b92d-2b12925f250f,Namespace:calico-system,Attempt:0,}"
Jan 30 13:49:43.145733 containerd[1785]: time="2025-01-30T13:49:43.145671731Z" level=info msg="shim disconnected" id=83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4 namespace=k8s.io
Jan 30 13:49:43.145733 containerd[1785]: time="2025-01-30T13:49:43.145724832Z" level=warning msg="cleaning up after shim disconnected" id=83c009e6166127fe43bb767b019e94f13d399e48e12070b45eb3c074771da2b4 namespace=k8s.io
Jan 30 13:49:43.145733 containerd[1785]: time="2025-01-30T13:49:43.145736032Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 30 13:49:43.481883 containerd[1785]: time="2025-01-30T13:49:43.481826081Z" level=error msg="Failed to destroy network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.482894 containerd[1785]: time="2025-01-30T13:49:43.482852406Z" level=error msg="encountered an error cleaning up failed sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.484014 containerd[1785]: time="2025-01-30T13:49:43.483548323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46xv,Uid:c8a9341d-4cd7-4b79-b7a9-9f342499286e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.486726 kubelet[3438]: E0130 13:49:43.486679 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.487401 kubelet[3438]: E0130 13:49:43.487369 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46xv"
Jan 30 13:49:43.487524 kubelet[3438]: E0130 13:49:43.487503 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-j46xv"
Jan 30 13:49:43.488001 kubelet[3438]: E0130 13:49:43.487641 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-j46xv_kube-system(c8a9341d-4cd7-4b79-b7a9-9f342499286e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-j46xv_kube-system(c8a9341d-4cd7-4b79-b7a9-9f342499286e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46xv" podUID="c8a9341d-4cd7-4b79-b7a9-9f342499286e"
Jan 30 13:49:43.588805 containerd[1785]: time="2025-01-30T13:49:43.588747274Z" level=error msg="Failed to destroy network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.592263 containerd[1785]: time="2025-01-30T13:49:43.592149356Z" level=error msg="Failed to destroy network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.593940 containerd[1785]: time="2025-01-30T13:49:43.593070279Z" level=error msg="encountered an error cleaning up failed sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.594180 containerd[1785]: time="2025-01-30T13:49:43.594136605Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9586bbdc-85zhb,Uid:3ddfe995-7c01-4e32-8dca-763b123eb964,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.595825 kubelet[3438]: E0130 13:49:43.594451 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.595825 kubelet[3438]: E0130 13:49:43.594503 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb"
Jan 30 13:49:43.595825 kubelet[3438]: E0130 13:49:43.594527 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb"
Jan 30 13:49:43.596082 kubelet[3438]: E0130 13:49:43.594577 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5d9586bbdc-85zhb_calico-system(3ddfe995-7c01-4e32-8dca-763b123eb964)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5d9586bbdc-85zhb_calico-system(3ddfe995-7c01-4e32-8dca-763b123eb964)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb" podUID="3ddfe995-7c01-4e32-8dca-763b123eb964"
Jan 30 13:49:43.597326 containerd[1785]: time="2025-01-30T13:49:43.597288681Z" level=error msg="Failed to destroy network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 30 13:49:43.598920 containerd[1785]: time="2025-01-30T13:49:43.598864919Z" level=error msg="encountered an error cleaning up failed sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-vq9jp,Uid:385f3a3c-e141-44cf-93e1-7d18099ed6fc,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.599460 kubelet[3438]: E0130 13:49:43.599331 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.599460 kubelet[3438]: E0130 13:49:43.599381 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp" Jan 30 13:49:43.599460 kubelet[3438]: E0130 13:49:43.599408 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp" Jan 30 13:49:43.600130 containerd[1785]: time="2025-01-30T13:49:43.599990446Z" level=error msg="encountered an error cleaning up failed sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.600209 kubelet[3438]: E0130 13:49:43.600075 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b4b4c89-vq9jp_calico-apiserver(385f3a3c-e141-44cf-93e1-7d18099ed6fc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b4b4c89-vq9jp_calico-apiserver(385f3a3c-e141-44cf-93e1-7d18099ed6fc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp" podUID="385f3a3c-e141-44cf-93e1-7d18099ed6fc" Jan 30 13:49:43.600564 containerd[1785]: time="2025-01-30T13:49:43.600320354Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fc58,Uid:4977d573-54f2-415d-abc0-e669e05a5801,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.601788 kubelet[3438]: E0130 13:49:43.601746 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.602216 containerd[1785]: time="2025-01-30T13:49:43.601764289Z" level=error msg="Failed to destroy network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.602216 containerd[1785]: time="2025-01-30T13:49:43.602101098Z" level=error msg="encountered an error cleaning up failed sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.602216 containerd[1785]: time="2025-01-30T13:49:43.602150199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbr7p,Uid:47432003-ec2a-4f52-b92d-2b12925f250f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.603258 kubelet[3438]: E0130 13:49:43.602306 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6fc58" Jan 30 13:49:43.603258 kubelet[3438]: E0130 13:49:43.602610 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.603258 kubelet[3438]: E0130 13:49:43.602649 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:43.603258 kubelet[3438]: E0130 
13:49:43.602668 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-pbr7p" Jan 30 13:49:43.603488 kubelet[3438]: E0130 13:49:43.602726 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-pbr7p_calico-system(47432003-ec2a-4f52-b92d-2b12925f250f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-pbr7p_calico-system(47432003-ec2a-4f52-b92d-2b12925f250f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f" Jan 30 13:49:43.605459 kubelet[3438]: E0130 13:49:43.604804 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-6fc58" Jan 30 13:49:43.605459 kubelet[3438]: E0130 13:49:43.605016 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-6fc58_kube-system(4977d573-54f2-415d-abc0-e669e05a5801)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-6fc58_kube-system(4977d573-54f2-415d-abc0-e669e05a5801)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6fc58" podUID="4977d573-54f2-415d-abc0-e669e05a5801" Jan 30 13:49:43.606403 kubelet[3438]: I0130 13:49:43.606385 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:43.609422 containerd[1785]: time="2025-01-30T13:49:43.609156469Z" level=info msg="StopPodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" Jan 30 13:49:43.609422 containerd[1785]: time="2025-01-30T13:49:43.609364274Z" level=info msg="Ensure that sandbox f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db in task-service has been cleanup successfully" Jan 30 13:49:43.613030 containerd[1785]: time="2025-01-30T13:49:43.612152441Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:49:43.615569 containerd[1785]: time="2025-01-30T13:49:43.615452821Z" level=error msg="Failed to destroy network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" error="plugin type=\"calico\" failed (delete): 
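Note: the PullImage of ghcr.io/flatcar/calico/node:v3.29.1 just above is the other half of the story. Once that image runs, calico/node records the Kubernetes node name under /var/lib/calico/nodename, the very file every stat in this section is failing on. A sketch of the write side, with two stated assumptions: Calico manifests conventionally supply the node name through a NODENAME environment variable, and the hostname fallback here is illustrative rather than Calico's exact startup logic:

    // write_nodename.go - sketch of what calico/node does at startup to satisfy
    // the CNI plugin's guard.
    package main

    import (
        "log"
        "os"
    )

    func main() {
        // NODENAME is the override conventionally set in Calico manifests;
        // falling back to the OS hostname is an assumption of this sketch.
        name := os.Getenv("NODENAME")
        if name == "" {
            h, err := os.Hostname()
            if err != nil {
                log.Fatal(err)
            }
            name = h
        }
        if err := os.MkdirAll("/var/lib/calico", 0o755); err != nil {
            log.Fatal(err)
        }
        // The Calico CNI plugin reads this file on every ADD/DEL.
        if err := os.WriteFile("/var/lib/calico/nodename", []byte(name), 0o644); err != nil {
            log.Fatal(err)
        }
    }

After this file appears, the StopPodSandbox (delete) retries can succeed, the dead sandboxes get cleaned up, and the pending pods can be recreated with working networking.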
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.616365 containerd[1785]: time="2025-01-30T13:49:43.616218840Z" level=error msg="encountered an error cleaning up failed sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.616365 containerd[1785]: time="2025-01-30T13:49:43.616277141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-d7hdd,Uid:3c02d11f-d543-425f-a360-1b4417202889,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.617926 kubelet[3438]: E0130 13:49:43.616619 3438 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.617926 kubelet[3438]: E0130 13:49:43.616667 3438 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd" Jan 30 13:49:43.617926 kubelet[3438]: E0130 13:49:43.616691 3438 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd" Jan 30 13:49:43.618171 kubelet[3438]: E0130 13:49:43.616737 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5d7b4b4c89-d7hdd_calico-apiserver(3c02d11f-d543-425f-a360-1b4417202889)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5d7b4b4c89-d7hdd_calico-apiserver(3c02d11f-d543-425f-a360-1b4417202889)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd" podUID="3c02d11f-d543-425f-a360-1b4417202889" Jan 30 13:49:43.652527 containerd[1785]: time="2025-01-30T13:49:43.652469019Z" level=error msg="StopPodSandbox 
for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" failed" error="failed to destroy network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:43.652811 kubelet[3438]: E0130 13:49:43.652767 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:43.652927 kubelet[3438]: E0130 13:49:43.652831 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db"} Jan 30 13:49:43.652927 kubelet[3438]: E0130 13:49:43.652915 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c8a9341d-4cd7-4b79-b7a9-9f342499286e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:43.653072 kubelet[3438]: E0130 13:49:43.652962 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c8a9341d-4cd7-4b79-b7a9-9f342499286e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-j46xv" podUID="c8a9341d-4cd7-4b79-b7a9-9f342499286e" Jan 30 13:49:43.807255 kubelet[3438]: I0130 13:49:43.806081 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:49:44.305328 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff-shm.mount: Deactivated successfully. Jan 30 13:49:44.306005 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd-shm.mount: Deactivated successfully. Jan 30 13:49:44.306298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db-shm.mount: Deactivated successfully. 
Jan 30 13:49:44.610228 kubelet[3438]: I0130 13:49:44.610079 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:44.612451 containerd[1785]: time="2025-01-30T13:49:44.611524573Z" level=info msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" Jan 30 13:49:44.612451 containerd[1785]: time="2025-01-30T13:49:44.611968483Z" level=info msg="Ensure that sandbox dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03 in task-service has been cleanup successfully" Jan 30 13:49:44.614284 kubelet[3438]: I0130 13:49:44.613570 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:44.614401 containerd[1785]: time="2025-01-30T13:49:44.614233138Z" level=info msg="StopPodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" Jan 30 13:49:44.614735 containerd[1785]: time="2025-01-30T13:49:44.614684249Z" level=info msg="Ensure that sandbox 815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914 in task-service has been cleanup successfully" Jan 30 13:49:44.617972 kubelet[3438]: I0130 13:49:44.617068 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:44.618077 containerd[1785]: time="2025-01-30T13:49:44.617623721Z" level=info msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" Jan 30 13:49:44.618077 containerd[1785]: time="2025-01-30T13:49:44.617860626Z" level=info msg="Ensure that sandbox df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd in task-service has been cleanup successfully" Jan 30 13:49:44.620568 kubelet[3438]: I0130 13:49:44.620549 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:44.621370 containerd[1785]: time="2025-01-30T13:49:44.621344811Z" level=info msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" Jan 30 13:49:44.621861 containerd[1785]: time="2025-01-30T13:49:44.621836123Z" level=info msg="Ensure that sandbox 98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2 in task-service has been cleanup successfully" Jan 30 13:49:44.624733 kubelet[3438]: I0130 13:49:44.624716 3438 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:44.626501 containerd[1785]: time="2025-01-30T13:49:44.626475835Z" level=info msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" Jan 30 13:49:44.626773 containerd[1785]: time="2025-01-30T13:49:44.626747942Z" level=info msg="Ensure that sandbox 91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff in task-service has been cleanup successfully" Jan 30 13:49:44.706713 containerd[1785]: time="2025-01-30T13:49:44.706651479Z" level=error msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" failed" error="failed to destroy network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:44.707669 containerd[1785]: time="2025-01-30T13:49:44.706898385Z" level=error msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" failed" error="failed to destroy network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:44.707787 kubelet[3438]: E0130 13:49:44.707210 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:44.707787 kubelet[3438]: E0130 13:49:44.707258 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd"} Jan 30 13:49:44.707787 kubelet[3438]: E0130 13:49:44.707303 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ddfe995-7c01-4e32-8dca-763b123eb964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:44.707787 kubelet[3438]: E0130 13:49:44.707337 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ddfe995-7c01-4e32-8dca-763b123eb964\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb" podUID="3ddfe995-7c01-4e32-8dca-763b123eb964" Jan 30 13:49:44.708525 kubelet[3438]: E0130 13:49:44.707378 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:44.708525 kubelet[3438]: E0130 13:49:44.707402 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2"} Jan 30 13:49:44.708525 kubelet[3438]: E0130 13:49:44.707428 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"47432003-ec2a-4f52-b92d-2b12925f250f\" with KillPodSandboxError: \"rpc error: 
code = Unknown desc = failed to destroy network for sandbox \\\"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:44.708525 kubelet[3438]: E0130 13:49:44.707451 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"47432003-ec2a-4f52-b92d-2b12925f250f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-pbr7p" podUID="47432003-ec2a-4f52-b92d-2b12925f250f" Jan 30 13:49:44.720156 containerd[1785]: time="2025-01-30T13:49:44.720093605Z" level=error msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" failed" error="failed to destroy network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:44.720985 kubelet[3438]: E0130 13:49:44.720760 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:44.720985 kubelet[3438]: E0130 13:49:44.720832 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03"} Jan 30 13:49:44.720985 kubelet[3438]: E0130 13:49:44.720880 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3c02d11f-d543-425f-a360-1b4417202889\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:44.720985 kubelet[3438]: E0130 13:49:44.720915 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3c02d11f-d543-425f-a360-1b4417202889\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd" podUID="3c02d11f-d543-425f-a360-1b4417202889" Jan 30 13:49:44.726196 containerd[1785]: time="2025-01-30T13:49:44.725756042Z" level=error msg="StopPodSandbox for 
\"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" failed" error="failed to destroy network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:44.726305 kubelet[3438]: E0130 13:49:44.726004 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:44.726305 kubelet[3438]: E0130 13:49:44.726051 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914"} Jan 30 13:49:44.726305 kubelet[3438]: E0130 13:49:44.726093 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"385f3a3c-e141-44cf-93e1-7d18099ed6fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:44.726305 kubelet[3438]: E0130 13:49:44.726127 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"385f3a3c-e141-44cf-93e1-7d18099ed6fc\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp" podUID="385f3a3c-e141-44cf-93e1-7d18099ed6fc" Jan 30 13:49:44.728178 containerd[1785]: time="2025-01-30T13:49:44.728142000Z" level=error msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" failed" error="failed to destroy network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:49:44.728366 kubelet[3438]: E0130 13:49:44.728322 3438 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:44.728441 kubelet[3438]: E0130 13:49:44.728370 3438 kuberuntime_manager.go:1375] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff"} Jan 30 13:49:44.728441 kubelet[3438]: E0130 13:49:44.728409 3438 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"4977d573-54f2-415d-abc0-e669e05a5801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:49:44.728534 kubelet[3438]: E0130 13:49:44.728438 3438 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"4977d573-54f2-415d-abc0-e669e05a5801\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-6fc58" podUID="4977d573-54f2-415d-abc0-e669e05a5801" Jan 30 13:49:51.553246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1837466320.mount: Deactivated successfully. Jan 30 13:49:51.598501 containerd[1785]: time="2025-01-30T13:49:51.598443147Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:51.608552 containerd[1785]: time="2025-01-30T13:49:51.608464885Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:49:51.611072 containerd[1785]: time="2025-01-30T13:49:51.611013345Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:51.620522 containerd[1785]: time="2025-01-30T13:49:51.620473970Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:51.621511 containerd[1785]: time="2025-01-30T13:49:51.621075884Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.008883542s" Jan 30 13:49:51.621511 containerd[1785]: time="2025-01-30T13:49:51.621119585Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:49:51.630918 containerd[1785]: time="2025-01-30T13:49:51.630889417Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:49:51.703360 containerd[1785]: time="2025-01-30T13:49:51.703304736Z" level=info msg="CreateContainer within sandbox \"cb7ccd5c0ee5e6851a619c532d52a3b8ba7e3d3b770caf405fb1734bef9fb2a0\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5d0e908e5a3ed1940c4259c85b87a5fe016b250c61b7af7b5fe498c75f61cae6\"" Jan 30 13:49:51.704759 containerd[1785]: time="2025-01-30T13:49:51.704067054Z" level=info msg="StartContainer for \"5d0e908e5a3ed1940c4259c85b87a5fe016b250c61b7af7b5fe498c75f61cae6\"" Jan 30 13:49:51.767916 containerd[1785]: time="2025-01-30T13:49:51.767855669Z" level=info msg="StartContainer for \"5d0e908e5a3ed1940c4259c85b87a5fe016b250c61b7af7b5fe498c75f61cae6\" returns successfully" Jan 30 13:49:51.993413 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:49:51.993580 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 30 13:49:52.673518 kubelet[3438]: I0130 13:49:52.673442 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zj249" podStartSLOduration=1.894821099 podStartE2EDuration="24.673421267s" podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:28.843318636 +0000 UTC m=+23.893586778" lastFinishedPulling="2025-01-30 13:49:51.621918904 +0000 UTC m=+46.672186946" observedRunningTime="2025-01-30 13:49:52.671575023 +0000 UTC m=+47.721843065" watchObservedRunningTime="2025-01-30 13:49:52.673421267 +0000 UTC m=+47.723689309" Jan 30 13:49:53.637985 kernel: bpftool[4710]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:49:53.711459 systemd[1]: run-containerd-runc-k8s.io-5d0e908e5a3ed1940c4259c85b87a5fe016b250c61b7af7b5fe498c75f61cae6-runc.cmaw3Y.mount: Deactivated successfully. Jan 30 13:49:53.975068 systemd-networkd[1361]: vxlan.calico: Link UP Jan 30 13:49:53.975081 systemd-networkd[1361]: vxlan.calico: Gained carrier Jan 30 13:49:55.327117 systemd-networkd[1361]: vxlan.calico: Gained IPv6LL Jan 30 13:49:55.487992 containerd[1785]: time="2025-01-30T13:49:55.486463162Z" level=info msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" Jan 30 13:49:55.487992 containerd[1785]: time="2025-01-30T13:49:55.486788770Z" level=info msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.554 [INFO][4828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.555 [INFO][4828] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" iface="eth0" netns="/var/run/netns/cni-5167fe25-41f0-9f87-bfc5-0c9b81df65d1" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.555 [INFO][4828] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" iface="eth0" netns="/var/run/netns/cni-5167fe25-41f0-9f87-bfc5-0c9b81df65d1" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.555 [INFO][4828] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" iface="eth0" netns="/var/run/netns/cni-5167fe25-41f0-9f87-bfc5-0c9b81df65d1" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.555 [INFO][4828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.555 [INFO][4828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.588 [INFO][4844] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.589 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.589 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.596 [WARNING][4844] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.596 [INFO][4844] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.598 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:55.603332 containerd[1785]: 2025-01-30 13:49:55.601 [INFO][4828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:49:55.605248 containerd[1785]: time="2025-01-30T13:49:55.605106167Z" level=info msg="TearDown network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" successfully" Jan 30 13:49:55.605248 containerd[1785]: time="2025-01-30T13:49:55.605167068Z" level=info msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" returns successfully" Jan 30 13:49:55.609261 systemd[1]: run-netns-cni\x2d5167fe25\x2d41f0\x2d9f87\x2dbfc5\x2d0c9b81df65d1.mount: Deactivated successfully. 
Jan 30 13:49:55.611389 containerd[1785]: time="2025-01-30T13:49:55.610551596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9586bbdc-85zhb,Uid:3ddfe995-7c01-4e32-8dca-763b123eb964,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.560 [INFO][4836] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.560 [INFO][4836] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" iface="eth0" netns="/var/run/netns/cni-2d39fd1e-041e-189f-b214-dcfd18f99df6" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.561 [INFO][4836] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" iface="eth0" netns="/var/run/netns/cni-2d39fd1e-041e-189f-b214-dcfd18f99df6" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.561 [INFO][4836] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" iface="eth0" netns="/var/run/netns/cni-2d39fd1e-041e-189f-b214-dcfd18f99df6" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.561 [INFO][4836] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.561 [INFO][4836] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.590 [INFO][4848] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.590 [INFO][4848] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.598 [INFO][4848] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.612 [WARNING][4848] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.612 [INFO][4848] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.614 [INFO][4848] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:55.616632 containerd[1785]: 2025-01-30 13:49:55.615 [INFO][4836] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:49:55.617171 containerd[1785]: time="2025-01-30T13:49:55.616790643Z" level=info msg="TearDown network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" successfully" Jan 30 13:49:55.617171 containerd[1785]: time="2025-01-30T13:49:55.616819444Z" level=info msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" returns successfully" Jan 30 13:49:55.617468 containerd[1785]: time="2025-01-30T13:49:55.617437259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fc58,Uid:4977d573-54f2-415d-abc0-e669e05a5801,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:55.621473 systemd[1]: run-netns-cni\x2d2d39fd1e\x2d041e\x2d189f\x2db214\x2ddcfd18f99df6.mount: Deactivated successfully. Jan 30 13:49:55.826286 systemd-networkd[1361]: calib9cbecc6018: Link UP Jan 30 13:49:55.826662 systemd-networkd[1361]: calib9cbecc6018: Gained carrier Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.737 [INFO][4857] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0 calico-kube-controllers-5d9586bbdc- calico-system 3ddfe995-7c01-4e32-8dca-763b123eb964 755 0 2025-01-30 13:49:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5d9586bbdc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e calico-kube-controllers-5d9586bbdc-85zhb eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib9cbecc6018 [] []}} ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.737 [INFO][4857] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.781 [INFO][4879] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" HandleID="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.791 [INFO][4879] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" HandleID="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003185d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-95297e853e", "pod":"calico-kube-controllers-5d9586bbdc-85zhb", "timestamp":"2025-01-30 13:49:55.781673841 
+0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.792 [INFO][4879] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.792 [INFO][4879] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.792 [INFO][4879] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.793 [INFO][4879] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.797 [INFO][4879] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.800 [INFO][4879] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.802 [INFO][4879] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.803 [INFO][4879] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.803 [INFO][4879] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.805 [INFO][4879] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0 Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.812 [INFO][4879] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.818 [INFO][4879] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.129/26] block=192.168.27.128/26 handle="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.818 [INFO][4879] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.129/26] handle="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.818 [INFO][4879] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:49:55.851978 containerd[1785]: 2025-01-30 13:49:55.818 [INFO][4879] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.129/26] IPv6=[] ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" HandleID="k8s-pod-network.0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.854148 containerd[1785]: 2025-01-30 13:49:55.822 [INFO][4857] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0", GenerateName:"calico-kube-controllers-5d9586bbdc-", Namespace:"calico-system", SelfLink:"", UID:"3ddfe995-7c01-4e32-8dca-763b123eb964", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9586bbdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"calico-kube-controllers-5d9586bbdc-85zhb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9cbecc6018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:55.854148 containerd[1785]: 2025-01-30 13:49:55.822 [INFO][4857] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.129/32] ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.854148 containerd[1785]: 2025-01-30 13:49:55.822 [INFO][4857] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9cbecc6018 ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.854148 containerd[1785]: 2025-01-30 13:49:55.827 [INFO][4857] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.854148 
containerd[1785]: 2025-01-30 13:49:55.828 [INFO][4857] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0", GenerateName:"calico-kube-controllers-5d9586bbdc-", Namespace:"calico-system", SelfLink:"", UID:"3ddfe995-7c01-4e32-8dca-763b123eb964", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9586bbdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0", Pod:"calico-kube-controllers-5d9586bbdc-85zhb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9cbecc6018", MAC:"3e:77:b7:d8:86:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:55.854148 containerd[1785]: 2025-01-30 13:49:55.849 [INFO][4857] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0" Namespace="calico-system" Pod="calico-kube-controllers-5d9586bbdc-85zhb" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:49:55.897544 systemd-networkd[1361]: calib993b592120: Link UP Jan 30 13:49:55.898626 systemd-networkd[1361]: calib993b592120: Gained carrier Jan 30 13:49:55.902449 containerd[1785]: time="2025-01-30T13:49:55.902338994Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:55.902662 containerd[1785]: time="2025-01-30T13:49:55.902632701Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:55.902921 containerd[1785]: time="2025-01-30T13:49:55.902799905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:55.903367 containerd[1785]: time="2025-01-30T13:49:55.903258016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.739 [INFO][4866] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0 coredns-7db6d8ff4d- kube-system 4977d573-54f2-415d-abc0-e669e05a5801 756 0 2025-01-30 13:49:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e coredns-7db6d8ff4d-6fc58 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib993b592120 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.739 [INFO][4866] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.780 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" HandleID="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.793 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" HandleID="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000304d90), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-95297e853e", "pod":"coredns-7db6d8ff4d-6fc58", "timestamp":"2025-01-30 13:49:55.780650717 +0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.793 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.819 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.819 [INFO][4880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.821 [INFO][4880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.827 [INFO][4880] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.848 [INFO][4880] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.854 [INFO][4880] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.858 [INFO][4880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.859 [INFO][4880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.861 [INFO][4880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760 Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.870 [INFO][4880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.888 [INFO][4880] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.130/26] block=192.168.27.128/26 handle="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.888 [INFO][4880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.130/26] handle="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.888 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
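The two sandboxes are set up concurrently, and the "host-wide IPAM lock" lines show how their allocations are serialized: handler [4879] (calico-kube-controllers) acquires the lock at 13:49:55.792 and releases it at .818, while handler [4880] (coredns), waiting since .793, acquires it at .819 and receives the next free address from the same affine block, 192.168.27.130. An illustrative sketch of that pattern; Calico's actual lock is datastore-backed rather than an in-process mutex, so this only models the ordering visible in the log:

```go
// Illustrative only: a sync.Mutex stands in for Calico's host-wide IPAM
// lock. Each allocator claims the block in turn, so concurrent sandbox
// setups receive consecutive addresses, as in the trace above.
package main

import (
	"fmt"
	"sync"
)

func main() {
	var mu sync.Mutex // stands in for the host-wide IPAM lock
	next := 129       // first free host index in 192.168.27.128/26
	var wg sync.WaitGroup
	for _, pod := range []string{"calico-kube-controllers-5d9586bbdc-85zhb", "coredns-7db6d8ff4d-6fc58"} {
		wg.Add(1)
		go func(pod string) {
			defer wg.Done()
			mu.Lock() // "About to acquire host-wide IPAM lock."
			ip := fmt.Sprintf("192.168.27.%d/26", next)
			next++
			mu.Unlock() // "Released host-wide IPAM lock."
			fmt.Println(pod, "->", ip)
		}(pod)
	}
	wg.Wait()
}
```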
Jan 30 13:49:55.929417 containerd[1785]: 2025-01-30 13:49:55.888 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.130/26] IPv6=[] ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" HandleID="k8s-pod-network.5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.891 [INFO][4866] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4977d573-54f2-415d-abc0-e669e05a5801", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"coredns-7db6d8ff4d-6fc58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib993b592120", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.892 [INFO][4866] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.130/32] ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.892 [INFO][4866] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib993b592120 ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.899 [INFO][4866] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" 
WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.900 [INFO][4866] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4977d573-54f2-415d-abc0-e669e05a5801", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760", Pod:"coredns-7db6d8ff4d-6fc58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib993b592120", MAC:"9a:82:0b:95:2f:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:55.931144 containerd[1785]: 2025-01-30 13:49:55.922 [INFO][4866] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760" Namespace="kube-system" Pod="coredns-7db6d8ff4d-6fc58" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:49:55.988844 containerd[1785]: time="2025-01-30T13:49:55.984090127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:55.988844 containerd[1785]: time="2025-01-30T13:49:55.988308327Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:55.988844 containerd[1785]: time="2025-01-30T13:49:55.988336427Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:55.988844 containerd[1785]: time="2025-01-30T13:49:55.988442430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:56.008680 containerd[1785]: time="2025-01-30T13:49:56.008472303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5d9586bbdc-85zhb,Uid:3ddfe995-7c01-4e32-8dca-763b123eb964,Namespace:calico-system,Attempt:1,} returns sandbox id \"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0\"" Jan 30 13:49:56.014119 containerd[1785]: time="2025-01-30T13:49:56.013246916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:49:56.054579 containerd[1785]: time="2025-01-30T13:49:56.054534592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-6fc58,Uid:4977d573-54f2-415d-abc0-e669e05a5801,Namespace:kube-system,Attempt:1,} returns sandbox id \"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760\"" Jan 30 13:49:56.058153 containerd[1785]: time="2025-01-30T13:49:56.058119377Z" level=info msg="CreateContainer within sandbox \"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:56.094569 containerd[1785]: time="2025-01-30T13:49:56.094519538Z" level=info msg="CreateContainer within sandbox \"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1de722b6e88788f749b6fc3387abfc132e80f05f273735b75235587ece1ec4d9\"" Jan 30 13:49:56.095486 containerd[1785]: time="2025-01-30T13:49:56.095236755Z" level=info msg="StartContainer for \"1de722b6e88788f749b6fc3387abfc132e80f05f273735b75235587ece1ec4d9\"" Jan 30 13:49:56.144793 containerd[1785]: time="2025-01-30T13:49:56.144615322Z" level=info msg="StartContainer for \"1de722b6e88788f749b6fc3387abfc132e80f05f273735b75235587ece1ec4d9\" returns successfully" Jan 30 13:49:56.485645 containerd[1785]: time="2025-01-30T13:49:56.485181374Z" level=info msg="StopPodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.531 [INFO][5049] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.533 [INFO][5049] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" iface="eth0" netns="/var/run/netns/cni-7c181c5f-e169-949b-60e5-05b65cc74ab6" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.533 [INFO][5049] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" iface="eth0" netns="/var/run/netns/cni-7c181c5f-e169-949b-60e5-05b65cc74ab6" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.533 [INFO][5049] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" iface="eth0" netns="/var/run/netns/cni-7c181c5f-e169-949b-60e5-05b65cc74ab6" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.534 [INFO][5049] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.534 [INFO][5049] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.555 [INFO][5055] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.555 [INFO][5055] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.555 [INFO][5055] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.560 [WARNING][5055] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.560 [INFO][5055] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.561 [INFO][5055] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:56.564279 containerd[1785]: 2025-01-30 13:49:56.563 [INFO][5049] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:49:56.565414 containerd[1785]: time="2025-01-30T13:49:56.564470648Z" level=info msg="TearDown network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" successfully" Jan 30 13:49:56.565414 containerd[1785]: time="2025-01-30T13:49:56.564526750Z" level=info msg="StopPodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" returns successfully" Jan 30 13:49:56.565491 containerd[1785]: time="2025-01-30T13:49:56.565466372Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-vq9jp,Uid:385f3a3c-e141-44cf-93e1-7d18099ed6fc,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:56.613196 systemd[1]: run-netns-cni\x2d7c181c5f\x2de169\x2d949b\x2d60e5\x2d05b65cc74ab6.mount: Deactivated successfully. 
Jan 30 13:49:56.726050 kubelet[3438]: I0130 13:49:56.725838 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-6fc58" podStartSLOduration=37.725808463 podStartE2EDuration="37.725808463s" podCreationTimestamp="2025-01-30 13:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:56.697818401 +0000 UTC m=+51.748086543" watchObservedRunningTime="2025-01-30 13:49:56.725808463 +0000 UTC m=+51.776076605" Jan 30 13:49:56.813501 systemd-networkd[1361]: cali9db35f13f59: Link UP Jan 30 13:49:56.813727 systemd-networkd[1361]: cali9db35f13f59: Gained carrier Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.709 [INFO][5062] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0 calico-apiserver-5d7b4b4c89- calico-apiserver 385f3a3c-e141-44cf-93e1-7d18099ed6fc 770 0 2025-01-30 13:49:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7b4b4c89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e calico-apiserver-5d7b4b4c89-vq9jp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9db35f13f59 [] []}} ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.710 [INFO][5062] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.768 [INFO][5074] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" HandleID="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.779 [INFO][5074] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" HandleID="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291310), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-95297e853e", "pod":"calico-apiserver-5d7b4b4c89-vq9jp", "timestamp":"2025-01-30 13:49:56.768762578 +0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.779 [INFO][5074] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.779 [INFO][5074] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.779 [INFO][5074] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.781 [INFO][5074] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.786 [INFO][5074] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.789 [INFO][5074] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.790 [INFO][5074] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.792 [INFO][5074] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.792 [INFO][5074] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.793 [INFO][5074] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657 Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.798 [INFO][5074] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.807 [INFO][5074] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.131/26] block=192.168.27.128/26 handle="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.807 [INFO][5074] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.131/26] handle="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.807 [INFO][5074] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
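The ipam/ipam.go sequence just above (look up the host's block affinity, load 192.168.27.128/26, assign from the block, write the block back to claim the IP) amounts to a first-free-address scan inside the node's affine /26. A rough sketch of the assignment step, with hypothetical names and an assumed in-use set: .130 went to coredns-7db6d8ff4d-6fc58 earlier in this log, .129 is assumed taken by a pod outside this excerpt, and Calico's real allocator additionally records a handle and attributes per address:

    import ipaddress

    block  = ipaddress.ip_network("192.168.27.128/26")     # the node's affine block
    in_use = {"192.168.27.129", "192.168.27.130"}          # assumption, see above

    def auto_assign(block, in_use):
        for ip in block.hosts():                           # scans .129 .. .190
            if str(ip) not in in_use:
                return ip
        raise RuntimeError("block exhausted: claim a new block affinity")

    print(auto_assign(block, in_use))                      # 192.168.27.131, matching the claim above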
Jan 30 13:49:56.837165 containerd[1785]: 2025-01-30 13:49:56.807 [INFO][5074] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.131/26] IPv6=[] ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" HandleID="k8s-pod-network.a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.809 [INFO][5062] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"385f3a3c-e141-44cf-93e1-7d18099ed6fc", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"calico-apiserver-5d7b4b4c89-vq9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db35f13f59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.809 [INFO][5062] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.131/32] ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.809 [INFO][5062] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9db35f13f59 ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.812 [INFO][5062] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.812 [INFO][5062] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"385f3a3c-e141-44cf-93e1-7d18099ed6fc", ResourceVersion:"770", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657", Pod:"calico-apiserver-5d7b4b4c89-vq9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db35f13f59", MAC:"4e:f3:b9:4b:ea:9f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:56.838138 containerd[1785]: 2025-01-30 13:49:56.834 [INFO][5062] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-vq9jp" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:49:56.864278 systemd-networkd[1361]: calib9cbecc6018: Gained IPv6LL Jan 30 13:49:56.869142 containerd[1785]: time="2025-01-30T13:49:56.869048049Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:56.869142 containerd[1785]: time="2025-01-30T13:49:56.869098250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:56.869142 containerd[1785]: time="2025-01-30T13:49:56.869111751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:56.869478 containerd[1785]: time="2025-01-30T13:49:56.869213853Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:56.933181 containerd[1785]: time="2025-01-30T13:49:56.933127564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-vq9jp,Uid:385f3a3c-e141-44cf-93e1-7d18099ed6fc,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657\"" Jan 30 13:49:57.311185 systemd-networkd[1361]: calib993b592120: Gained IPv6LL Jan 30 13:49:57.486563 containerd[1785]: time="2025-01-30T13:49:57.485647027Z" level=info msg="StopPodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.542 [INFO][5151] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.543 [INFO][5151] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" iface="eth0" netns="/var/run/netns/cni-8010c31c-03a6-7ddf-a3a8-d24b63eab219" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.544 [INFO][5151] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" iface="eth0" netns="/var/run/netns/cni-8010c31c-03a6-7ddf-a3a8-d24b63eab219" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.545 [INFO][5151] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" iface="eth0" netns="/var/run/netns/cni-8010c31c-03a6-7ddf-a3a8-d24b63eab219" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.545 [INFO][5151] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.545 [INFO][5151] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.568 [INFO][5158] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.568 [INFO][5158] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.568 [INFO][5158] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.574 [WARNING][5158] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.574 [INFO][5158] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.576 [INFO][5158] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:57.578253 containerd[1785]: 2025-01-30 13:49:57.577 [INFO][5151] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:49:57.579704 containerd[1785]: time="2025-01-30T13:49:57.578562424Z" level=info msg="TearDown network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" successfully" Jan 30 13:49:57.579704 containerd[1785]: time="2025-01-30T13:49:57.578602625Z" level=info msg="StopPodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" returns successfully" Jan 30 13:49:57.579704 containerd[1785]: time="2025-01-30T13:49:57.579421444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46xv,Uid:c8a9341d-4cd7-4b79-b7a9-9f342499286e,Namespace:kube-system,Attempt:1,}" Jan 30 13:49:57.617655 systemd[1]: run-netns-cni\x2d8010c31c\x2d03a6\x2d7ddf\x2da3a8\x2dd24b63eab219.mount: Deactivated successfully. Jan 30 13:49:57.756124 systemd-networkd[1361]: cali08baf4d59f2: Link UP Jan 30 13:49:57.756550 systemd-networkd[1361]: cali08baf4d59f2: Gained carrier Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.667 [INFO][5165] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0 coredns-7db6d8ff4d- kube-system c8a9341d-4cd7-4b79-b7a9-9f342499286e 786 0 2025-01-30 13:49:19 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e coredns-7db6d8ff4d-j46xv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali08baf4d59f2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.667 [INFO][5165] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.716 [INFO][5175] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" HandleID="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" 
Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.724 [INFO][5175] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" HandleID="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed450), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-95297e853e", "pod":"coredns-7db6d8ff4d-j46xv", "timestamp":"2025-01-30 13:49:57.716669589 +0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.724 [INFO][5175] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.724 [INFO][5175] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.724 [INFO][5175] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.725 [INFO][5175] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.729 [INFO][5175] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.732 [INFO][5175] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.734 [INFO][5175] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.736 [INFO][5175] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.736 [INFO][5175] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.737 [INFO][5175] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5 Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.741 [INFO][5175] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.748 [INFO][5175] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.132/26] block=192.168.27.128/26 handle="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.748 [INFO][5175] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.132/26] 
handle="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.748 [INFO][5175] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:57.779123 containerd[1785]: 2025-01-30 13:49:57.748 [INFO][5175] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.132/26] IPv6=[] ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" HandleID="k8s-pod-network.d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.781238 containerd[1785]: 2025-01-30 13:49:57.750 [INFO][5165] cni-plugin/k8s.go 386: Populated endpoint ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8a9341d-4cd7-4b79-b7a9-9f342499286e", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"coredns-7db6d8ff4d-j46xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08baf4d59f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:57.781238 containerd[1785]: 2025-01-30 13:49:57.750 [INFO][5165] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.132/32] ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.781238 containerd[1785]: 2025-01-30 13:49:57.750 [INFO][5165] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali08baf4d59f2 ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.781238 containerd[1785]: 
2025-01-30 13:49:57.754 [INFO][5165] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:57.781238 containerd[1785]: 2025-01-30 13:49:57.755 [INFO][5165] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8a9341d-4cd7-4b79-b7a9-9f342499286e", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5", Pod:"coredns-7db6d8ff4d-j46xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08baf4d59f2", MAC:"d2:e0:8e:40:37:e4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:57.781238 containerd[1785]: 2025-01-30 13:49:57.774 [INFO][5165] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5" Namespace="kube-system" Pod="coredns-7db6d8ff4d-j46xv" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:49:58.063144 containerd[1785]: time="2025-01-30T13:49:58.062418963Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:58.063144 containerd[1785]: time="2025-01-30T13:49:58.062481765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:58.063144 containerd[1785]: time="2025-01-30T13:49:58.062517165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:58.063144 containerd[1785]: time="2025-01-30T13:49:58.062628868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:58.148295 containerd[1785]: time="2025-01-30T13:49:58.148244592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-j46xv,Uid:c8a9341d-4cd7-4b79-b7a9-9f342499286e,Namespace:kube-system,Attempt:1,} returns sandbox id \"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5\"" Jan 30 13:49:58.157253 containerd[1785]: time="2025-01-30T13:49:58.157019700Z" level=info msg="CreateContainer within sandbox \"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:49:58.197926 containerd[1785]: time="2025-01-30T13:49:58.197871866Z" level=info msg="CreateContainer within sandbox \"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ddf46c0377f927fa959b45e3337c6c1160e31fe179b1a6f40214559ffa5d842f\"" Jan 30 13:49:58.200009 containerd[1785]: time="2025-01-30T13:49:58.199025593Z" level=info msg="StartContainer for \"ddf46c0377f927fa959b45e3337c6c1160e31fe179b1a6f40214559ffa5d842f\"" Jan 30 13:49:58.280921 containerd[1785]: time="2025-01-30T13:49:58.276220518Z" level=info msg="StartContainer for \"ddf46c0377f927fa959b45e3337c6c1160e31fe179b1a6f40214559ffa5d842f\" returns successfully" Jan 30 13:49:58.486115 containerd[1785]: time="2025-01-30T13:49:58.485684870Z" level=info msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" Jan 30 13:49:58.486455 containerd[1785]: time="2025-01-30T13:49:58.486428188Z" level=info msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" Jan 30 13:49:58.527871 systemd-networkd[1361]: cali9db35f13f59: Gained IPv6LL Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.596 [INFO][5304] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.597 [INFO][5304] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" iface="eth0" netns="/var/run/netns/cni-3c63a2ca-c19e-3dbd-aaaf-f5badeb04ebf" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.597 [INFO][5304] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" iface="eth0" netns="/var/run/netns/cni-3c63a2ca-c19e-3dbd-aaaf-f5badeb04ebf" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.598 [INFO][5304] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" iface="eth0" netns="/var/run/netns/cni-3c63a2ca-c19e-3dbd-aaaf-f5badeb04ebf" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.598 [INFO][5304] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.598 [INFO][5304] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.651 [INFO][5317] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.651 [INFO][5317] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.651 [INFO][5317] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.663 [WARNING][5317] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.663 [INFO][5317] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.665 [INFO][5317] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:58.669502 containerd[1785]: 2025-01-30 13:49:58.667 [INFO][5304] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:49:58.672760 containerd[1785]: time="2025-01-30T13:49:58.669859424Z" level=info msg="TearDown network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" successfully" Jan 30 13:49:58.672760 containerd[1785]: time="2025-01-30T13:49:58.669895025Z" level=info msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" returns successfully" Jan 30 13:49:58.675776 systemd[1]: run-netns-cni\x2d3c63a2ca\x2dc19e\x2d3dbd\x2daaaf\x2df5badeb04ebf.mount: Deactivated successfully. 
Jan 30 13:49:58.676311 containerd[1785]: time="2025-01-30T13:49:58.676029070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbr7p,Uid:47432003-ec2a-4f52-b92d-2b12925f250f,Namespace:calico-system,Attempt:1,}" Jan 30 13:49:58.741032 kubelet[3438]: I0130 13:49:58.737562 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-j46xv" podStartSLOduration=39.737537924 podStartE2EDuration="39.737537924s" podCreationTimestamp="2025-01-30 13:49:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:49:58.736840008 +0000 UTC m=+53.787108050" watchObservedRunningTime="2025-01-30 13:49:58.737537924 +0000 UTC m=+53.787805966" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.640 [INFO][5305] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.640 [INFO][5305] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" iface="eth0" netns="/var/run/netns/cni-7a316965-88ae-6933-6fa0-68666cab245e" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.641 [INFO][5305] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" iface="eth0" netns="/var/run/netns/cni-7a316965-88ae-6933-6fa0-68666cab245e" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.641 [INFO][5305] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" iface="eth0" netns="/var/run/netns/cni-7a316965-88ae-6933-6fa0-68666cab245e" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.641 [INFO][5305] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.641 [INFO][5305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.710 [INFO][5322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.710 [INFO][5322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.710 [INFO][5322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.728 [WARNING][5322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.728 [INFO][5322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.735 [INFO][5322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:49:58.754006 containerd[1785]: 2025-01-30 13:49:58.745 [INFO][5305] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:49:58.756273 containerd[1785]: time="2025-01-30T13:49:58.755497949Z" level=info msg="TearDown network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" successfully" Jan 30 13:49:58.756273 containerd[1785]: time="2025-01-30T13:49:58.755533450Z" level=info msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" returns successfully" Jan 30 13:49:58.768641 containerd[1785]: time="2025-01-30T13:49:58.763894247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-d7hdd,Uid:3c02d11f-d543-425f-a360-1b4417202889,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:49:58.772598 systemd[1]: run-netns-cni\x2d7a316965\x2d88ae\x2d6933\x2d6fa0\x2d68666cab245e.mount: Deactivated successfully. 
Jan 30 13:49:59.090580 systemd-networkd[1361]: cali51f7b1bc2d7: Link UP Jan 30 13:49:59.092861 systemd-networkd[1361]: cali51f7b1bc2d7: Gained carrier Jan 30 13:49:59.103644 systemd-networkd[1361]: cali08baf4d59f2: Gained IPv6LL Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:58.868 [INFO][5332] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0 csi-node-driver- calico-system 47432003-ec2a-4f52-b92d-2b12925f250f 796 0 2025-01-30 13:49:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e csi-node-driver-pbr7p eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali51f7b1bc2d7 [] []}} ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:58.869 [INFO][5332] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.003 [INFO][5358] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" HandleID="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.025 [INFO][5358] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" HandleID="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edbc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-95297e853e", "pod":"csi-node-driver-pbr7p", "timestamp":"2025-01-30 13:49:59.003297307 +0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.025 [INFO][5358] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.025 [INFO][5358] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
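Each "Gained IPv6LL" line records a host-side cali* veth acquiring its fe80::/64 link-local address. Under the classic EUI-64 derivation (flip the universal/local bit of the MAC and splice in ff:fe), the MACs the plugin logs map to predictable addresses; systemd-networkd can also be configured for stable-privacy link-local addresses, so take this as the textbook mapping rather than a guarantee of what was assigned:

    def eui64_link_local(mac: str) -> str:
        b = [int(x, 16) for x in mac.split(":")]
        b[0] ^= 0x02                                    # flip the universal/local bit
        words = [b[0] << 8 | b[1], b[2] << 8 | 0xFF,
                 0xFE << 8 | b[3], b[4] << 8 | b[5]]
        return "fe80::" + ":".join("%x" % w for w in words)

    print(eui64_link_local("4e:f3:b9:4b:ea:9f"))        # cali9db35f13f59's MAC, logged earlier
    # fe80::4cf3:b9ff:fe4b:ea9f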
Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.025 [INFO][5358] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.029 [INFO][5358] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.044 [INFO][5358] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.054 [INFO][5358] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.056 [INFO][5358] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.061 [INFO][5358] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.061 [INFO][5358] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.063 [INFO][5358] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232 Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.070 [INFO][5358] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.081 [INFO][5358] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.133/26] block=192.168.27.128/26 handle="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.081 [INFO][5358] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.133/26] handle="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.081 [INFO][5358] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
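One detail worth calling out in these IPAM lines: the claim is logged as 192.168.27.133/26, relative to the node's block, while the WorkloadEndpoint below records IPNetworks ["192.168.27.133/32"], because the endpoint owns exactly one address and the /26 only names the allocation block it came from. The containment is trivial to confirm:

    import ipaddress

    block = ipaddress.ip_network("192.168.27.128/26")    # node's affine block
    ep    = ipaddress.ip_interface("192.168.27.133/32")  # csi-node-driver-pbr7p's address
    print(ep.ip in block)                                # True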
Jan 30 13:49:59.119788 containerd[1785]: 2025-01-30 13:49:59.081 [INFO][5358] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.133/26] IPv6=[] ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" HandleID="k8s-pod-network.55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.085 [INFO][5332] cni-plugin/k8s.go 386: Populated endpoint ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47432003-ec2a-4f52-b92d-2b12925f250f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"csi-node-driver-pbr7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f7b1bc2d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.085 [INFO][5332] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.133/32] ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.085 [INFO][5332] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali51f7b1bc2d7 ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.088 [INFO][5332] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.088 [INFO][5332] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47432003-ec2a-4f52-b92d-2b12925f250f", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232", Pod:"csi-node-driver-pbr7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f7b1bc2d7", MAC:"d6:18:f9:10:95:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:59.121407 containerd[1785]: 2025-01-30 13:49:59.112 [INFO][5332] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232" Namespace="calico-system" Pod="csi-node-driver-pbr7p" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:49:59.182582 containerd[1785]: time="2025-01-30T13:49:59.181250915Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:59.182582 containerd[1785]: time="2025-01-30T13:49:59.182337940Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:59.182582 containerd[1785]: time="2025-01-30T13:49:59.182355241Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:59.182582 containerd[1785]: time="2025-01-30T13:49:59.182505344Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:59.243053 systemd-networkd[1361]: cali7fc3c317de0: Link UP Jan 30 13:49:59.245726 systemd-networkd[1361]: cali7fc3c317de0: Gained carrier Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:58.993 [INFO][5347] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0 calico-apiserver-5d7b4b4c89- calico-apiserver 3c02d11f-d543-425f-a360-1b4417202889 797 0 2025-01-30 13:49:28 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5d7b4b4c89 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-95297e853e calico-apiserver-5d7b4b4c89-d7hdd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7fc3c317de0 [] []}} ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:58.994 [INFO][5347] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.081 [INFO][5366] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" HandleID="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.120 [INFO][5366] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" HandleID="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000366cd0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-95297e853e", "pod":"calico-apiserver-5d7b4b4c89-d7hdd", "timestamp":"2025-01-30 13:49:59.081111947 +0000 UTC"}, Hostname:"ci-4081.3.0-a-95297e853e", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.120 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.121 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
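Every assignment in this log is bracketed by "About to acquire host-wide IPAM lock" and "Released host-wide IPAM lock": concurrent CNI invocations on the node (here [5358] for csi-node-driver-pbr7p and [5366] for calico-apiserver-5d7b4b4c89-d7hdd overlap) are serialized so two pods cannot scan the same block and claim the same address. Because each CNI call is a separate short-lived process, an in-process mutex would not suffice; a sketch of the pattern's shape using a file lock, with a hypothetical path and names:

    import fcntl

    def with_host_wide_ipam_lock(op):
        # lock file shared by all plugin processes on the node (illustrative path)
        with open("/var/run/calico-ipam.lock", "w") as f:
            fcntl.flock(f, fcntl.LOCK_EX)      # "About to acquire host-wide IPAM lock."
            try:
                return op()                    # scan the block, claim the IP, write it back
            finally:
                fcntl.flock(f, fcntl.LOCK_UN)  # "Released host-wide IPAM lock."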
Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.121 [INFO][5366] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-95297e853e' Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.125 [INFO][5366] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.138 [INFO][5366] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.148 [INFO][5366] ipam/ipam.go 489: Trying affinity for 192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.150 [INFO][5366] ipam/ipam.go 155: Attempting to load block cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.154 [INFO][5366] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.27.128/26 host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.154 [INFO][5366] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.27.128/26 handle="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.159 [INFO][5366] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.178 [INFO][5366] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.27.128/26 handle="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.192 [INFO][5366] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.27.134/26] block=192.168.27.128/26 handle="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.193 [INFO][5366] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.27.134/26] handle="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" host="ci-4081.3.0-a-95297e853e" Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.198 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 30 13:49:59.307086 containerd[1785]: 2025-01-30 13:49:59.199 [INFO][5366] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.27.134/26] IPv6=[] ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" HandleID="k8s-pod-network.e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.222 [INFO][5347] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c02d11f-d543-425f-a360-1b4417202889", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"", Pod:"calico-apiserver-5d7b4b4c89-d7hdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fc3c317de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.224 [INFO][5347] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.27.134/32] ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.224 [INFO][5347] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7fc3c317de0 ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.245 [INFO][5347] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.247 [INFO][5347] cni-plugin/k8s.go 414: Added
Mac, interface name, and active container ID to endpoint ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c02d11f-d543-425f-a360-1b4417202889", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f", Pod:"calico-apiserver-5d7b4b4c89-d7hdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fc3c317de0", MAC:"3e:e1:5d:11:a6:57", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:49:59.308771 containerd[1785]: 2025-01-30 13:49:59.288 [INFO][5347] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f" Namespace="calico-apiserver" Pod="calico-apiserver-5d7b4b4c89-d7hdd" WorkloadEndpoint="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:49:59.344586 containerd[1785]: time="2025-01-30T13:49:59.344415272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-pbr7p,Uid:47432003-ec2a-4f52-b92d-2b12925f250f,Namespace:calico-system,Attempt:1,} returns sandbox id \"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232\"" Jan 30 13:49:59.391818 containerd[1785]: time="2025-01-30T13:49:59.391609388Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:49:59.391818 containerd[1785]: time="2025-01-30T13:49:59.391674690Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:49:59.391818 containerd[1785]: time="2025-01-30T13:49:59.391695290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:59.392414 containerd[1785]: time="2025-01-30T13:49:59.391796092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..."
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:49:59.482199 containerd[1785]: time="2025-01-30T13:49:59.482163929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5d7b4b4c89-d7hdd,Uid:3c02d11f-d543-425f-a360-1b4417202889,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f\"" Jan 30 13:49:59.498783 containerd[1785]: time="2025-01-30T13:49:59.498736921Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:59.511205 containerd[1785]: time="2025-01-30T13:49:59.511136114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:49:59.529075 containerd[1785]: time="2025-01-30T13:49:59.528999136Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:59.538281 containerd[1785]: time="2025-01-30T13:49:59.538212054Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:49:59.539045 containerd[1785]: time="2025-01-30T13:49:59.538864869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.525573252s" Jan 30 13:49:59.539045 containerd[1785]: time="2025-01-30T13:49:59.538905870Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:49:59.540877 containerd[1785]: time="2025-01-30T13:49:59.540693713Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:49:59.548330 containerd[1785]: time="2025-01-30T13:49:59.548300493Z" level=info msg="CreateContainer within sandbox \"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:49:59.631088 containerd[1785]: time="2025-01-30T13:49:59.630981047Z" level=info msg="CreateContainer within sandbox \"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"11f2ea09836a225087751090d74bea5aba6671ffef70bb9060cb32315ef5c317\"" Jan 30 13:49:59.633543 containerd[1785]: time="2025-01-30T13:49:59.632138675Z" level=info msg="StartContainer for \"11f2ea09836a225087751090d74bea5aba6671ffef70bb9060cb32315ef5c317\"" Jan 30 13:49:59.710495 containerd[1785]: time="2025-01-30T13:49:59.709971215Z" level=info msg="StartContainer for \"11f2ea09836a225087751090d74bea5aba6671ffef70bb9060cb32315ef5c317\" returns successfully" Jan 30 13:49:59.742349 kubelet[3438]: I0130 13:49:59.742280 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5d9586bbdc-85zhb" podStartSLOduration=28.214626077 podStartE2EDuration="31.742165276s" 
podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:56.012364895 +0000 UTC m=+51.062633037" lastFinishedPulling="2025-01-30 13:49:59.539904194 +0000 UTC m=+54.590172236" observedRunningTime="2025-01-30 13:49:59.741310756 +0000 UTC m=+54.791578798" watchObservedRunningTime="2025-01-30 13:49:59.742165276 +0000 UTC m=+54.792433318" Jan 30 13:50:00.383099 systemd-networkd[1361]: cali7fc3c317de0: Gained IPv6LL Jan 30 13:50:00.703252 systemd-networkd[1361]: cali51f7b1bc2d7: Gained IPv6LL Jan 30 13:50:02.842967 containerd[1785]: time="2025-01-30T13:50:02.842906596Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:02.850017 containerd[1785]: time="2025-01-30T13:50:02.849885361Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:50:02.862353 containerd[1785]: time="2025-01-30T13:50:02.862282455Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:02.873223 containerd[1785]: time="2025-01-30T13:50:02.873156413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:02.874572 containerd[1785]: time="2025-01-30T13:50:02.874393942Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.333668729s" Jan 30 13:50:02.874572 containerd[1785]: time="2025-01-30T13:50:02.874447443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:50:02.878776 containerd[1785]: time="2025-01-30T13:50:02.878349236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:50:02.880914 containerd[1785]: time="2025-01-30T13:50:02.880884996Z" level=info msg="CreateContainer within sandbox \"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:50:02.923751 containerd[1785]: time="2025-01-30T13:50:02.923698810Z" level=info msg="CreateContainer within sandbox \"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"c0f9bb70a4c6c7d5893ef3fd1bca043856870f726dbb17e3ef1f85deaa156707\"" Jan 30 13:50:02.924765 containerd[1785]: time="2025-01-30T13:50:02.924421427Z" level=info msg="StartContainer for \"c0f9bb70a4c6c7d5893ef3fd1bca043856870f726dbb17e3ef1f85deaa156707\"" Jan 30 13:50:03.009601 containerd[1785]: time="2025-01-30T13:50:03.009543544Z" level=info msg="StartContainer for \"c0f9bb70a4c6c7d5893ef3fd1bca043856870f726dbb17e3ef1f85deaa156707\" returns successfully" Jan 30 13:50:03.758973 kubelet[3438]: I0130 13:50:03.755996 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-vq9jp" 
podStartSLOduration=29.813496924 podStartE2EDuration="35.75596963s" podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:56.934671701 +0000 UTC m=+51.984939743" lastFinishedPulling="2025-01-30 13:50:02.877144407 +0000 UTC m=+57.927412449" observedRunningTime="2025-01-30 13:50:03.755529019 +0000 UTC m=+58.805797161" watchObservedRunningTime="2025-01-30 13:50:03.75596963 +0000 UTC m=+58.806237772" Jan 30 13:50:04.417576 containerd[1785]: time="2025-01-30T13:50:04.417529005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:04.420029 containerd[1785]: time="2025-01-30T13:50:04.419962662Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:50:04.425540 containerd[1785]: time="2025-01-30T13:50:04.425486593Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:04.431899 containerd[1785]: time="2025-01-30T13:50:04.430455411Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:04.431899 containerd[1785]: time="2025-01-30T13:50:04.431791943Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.553398606s" Jan 30 13:50:04.431899 containerd[1785]: time="2025-01-30T13:50:04.431826644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:50:04.435712 containerd[1785]: time="2025-01-30T13:50:04.435061720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:50:04.436179 containerd[1785]: time="2025-01-30T13:50:04.436148946Z" level=info msg="CreateContainer within sandbox \"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:50:04.481791 containerd[1785]: time="2025-01-30T13:50:04.481744726Z" level=info msg="CreateContainer within sandbox \"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aaabce33ee556af949148925e22766c3bc9c9a44cc6ba57b19eaba97358ef82e\"" Jan 30 13:50:04.482565 containerd[1785]: time="2025-01-30T13:50:04.482462743Z" level=info msg="StartContainer for \"aaabce33ee556af949148925e22766c3bc9c9a44cc6ba57b19eaba97358ef82e\"" Jan 30 13:50:04.547751 containerd[1785]: time="2025-01-30T13:50:04.547710689Z" level=info msg="StartContainer for \"aaabce33ee556af949148925e22766c3bc9c9a44cc6ba57b19eaba97358ef82e\" returns successfully" Jan 30 13:50:04.745333 kubelet[3438]: I0130 13:50:04.745293 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:04.875777 containerd[1785]: time="2025-01-30T13:50:04.875717161Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 30 13:50:04.878348 containerd[1785]: time="2025-01-30T13:50:04.878283222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:50:04.880287 containerd[1785]: time="2025-01-30T13:50:04.880256569Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 445.163347ms" Jan 30 13:50:04.880287 containerd[1785]: time="2025-01-30T13:50:04.880291869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:50:04.881889 containerd[1785]: time="2025-01-30T13:50:04.881318594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:50:04.883312 containerd[1785]: time="2025-01-30T13:50:04.882918032Z" level=info msg="CreateContainer within sandbox \"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:50:04.936883 containerd[1785]: time="2025-01-30T13:50:04.936841609Z" level=info msg="CreateContainer within sandbox \"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"bdf0a598c1d11f789049c24336d57511d5b88a1278573a66b6f0d2b65f7d67f6\"" Jan 30 13:50:04.937582 containerd[1785]: time="2025-01-30T13:50:04.937485525Z" level=info msg="StartContainer for \"bdf0a598c1d11f789049c24336d57511d5b88a1278573a66b6f0d2b65f7d67f6\"" Jan 30 13:50:05.010982 containerd[1785]: time="2025-01-30T13:50:05.010832762Z" level=info msg="StartContainer for \"bdf0a598c1d11f789049c24336d57511d5b88a1278573a66b6f0d2b65f7d67f6\" returns successfully" Jan 30 13:50:05.455362 containerd[1785]: time="2025-01-30T13:50:05.455111689Z" level=info msg="StopPodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.514 [WARNING][5686] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"385f3a3c-e141-44cf-93e1-7d18099ed6fc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657", Pod:"calico-apiserver-5d7b4b4c89-vq9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db35f13f59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.514 [INFO][5686] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.514 [INFO][5686] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" iface="eth0" netns="" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.514 [INFO][5686] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.515 [INFO][5686] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.558 [INFO][5694] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.558 [INFO][5694] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.559 [INFO][5694] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.565 [WARNING][5694] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.565 [INFO][5694] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.567 [INFO][5694] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:05.572002 containerd[1785]: 2025-01-30 13:50:05.569 [INFO][5686] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.572002 containerd[1785]: time="2025-01-30T13:50:05.571725452Z" level=info msg="TearDown network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" successfully" Jan 30 13:50:05.572002 containerd[1785]: time="2025-01-30T13:50:05.571776053Z" level=info msg="StopPodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" returns successfully" Jan 30 13:50:05.572740 containerd[1785]: time="2025-01-30T13:50:05.572532771Z" level=info msg="RemovePodSandbox for \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" Jan 30 13:50:05.572740 containerd[1785]: time="2025-01-30T13:50:05.572566672Z" level=info msg="Forcibly stopping sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\"" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.615 [WARNING][5712] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"385f3a3c-e141-44cf-93e1-7d18099ed6fc", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"a3971b95da17d2c0eb4d31b09f937ecd6b92ec7fcffe47b25d59ae6bdf194657", Pod:"calico-apiserver-5d7b4b4c89-vq9jp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9db35f13f59", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.615 [INFO][5712] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.615 [INFO][5712] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" iface="eth0" netns="" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.615 [INFO][5712] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.615 [INFO][5712] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.647 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.647 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.647 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.652 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.652 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" HandleID="k8s-pod-network.815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--vq9jp-eth0" Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.654 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:05.657190 containerd[1785]: 2025-01-30 13:50:05.655 [INFO][5712] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914" Jan 30 13:50:05.658032 containerd[1785]: time="2025-01-30T13:50:05.657267779Z" level=info msg="TearDown network for sandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" successfully" Jan 30 13:50:05.676156 containerd[1785]: time="2025-01-30T13:50:05.676102125Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:05.676321 containerd[1785]: time="2025-01-30T13:50:05.676222728Z" level=info msg="RemovePodSandbox \"815e926342bab51978f07be9c1c1c03e83c679cd833f584f4ab0b4af85dda914\" returns successfully" Jan 30 13:50:05.677636 containerd[1785]: time="2025-01-30T13:50:05.676902444Z" level=info msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.751 [WARNING][5737] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47432003-ec2a-4f52-b92d-2b12925f250f", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232", Pod:"csi-node-driver-pbr7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f7b1bc2d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.751 [INFO][5737] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.751 [INFO][5737] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" iface="eth0" netns="" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.752 [INFO][5737] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.752 [INFO][5737] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.827 [INFO][5744] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.827 [INFO][5744] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.827 [INFO][5744] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.832 [WARNING][5744] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.832 [INFO][5744] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.833 [INFO][5744] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:05.835712 containerd[1785]: 2025-01-30 13:50:05.834 [INFO][5737] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.836341 containerd[1785]: time="2025-01-30T13:50:05.835715907Z" level=info msg="TearDown network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" successfully" Jan 30 13:50:05.836341 containerd[1785]: time="2025-01-30T13:50:05.835755008Z" level=info msg="StopPodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" returns successfully" Jan 30 13:50:05.836417 containerd[1785]: time="2025-01-30T13:50:05.836365823Z" level=info msg="RemovePodSandbox for \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" Jan 30 13:50:05.836417 containerd[1785]: time="2025-01-30T13:50:05.836398923Z" level=info msg="Forcibly stopping sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\"" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.872 [WARNING][5764] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"47432003-ec2a-4f52-b92d-2b12925f250f", ResourceVersion:"809", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232", Pod:"csi-node-driver-pbr7p", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.27.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali51f7b1bc2d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.872 [INFO][5764] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.872 [INFO][5764] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" iface="eth0" netns="" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.872 [INFO][5764] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.873 [INFO][5764] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.891 [INFO][5770] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.892 [INFO][5770] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.892 [INFO][5770] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.897 [WARNING][5770] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.897 [INFO][5770] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" HandleID="k8s-pod-network.98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Workload="ci--4081.3.0--a--95297e853e-k8s-csi--node--driver--pbr7p-eth0" Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.899 [INFO][5770] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:05.903011 containerd[1785]: 2025-01-30 13:50:05.900 [INFO][5764] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2" Jan 30 13:50:05.903011 containerd[1785]: time="2025-01-30T13:50:05.901239860Z" level=info msg="TearDown network for sandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" successfully" Jan 30 13:50:05.918613 containerd[1785]: time="2025-01-30T13:50:05.918559570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:05.918772 containerd[1785]: time="2025-01-30T13:50:05.918689173Z" level=info msg="RemovePodSandbox \"98a32cb352590cec139f9744c9d721aeaa2f4702574f08885f4223f9c3f563d2\" returns successfully" Jan 30 13:50:05.933753 containerd[1785]: time="2025-01-30T13:50:05.933711029Z" level=info msg="StopPodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" Jan 30 13:50:06.037219 kubelet[3438]: I0130 13:50:06.037072 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-5d7b4b4c89-d7hdd" podStartSLOduration=32.639673456 podStartE2EDuration="38.037047878s" podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:59.483759167 +0000 UTC m=+54.534027209" lastFinishedPulling="2025-01-30 13:50:04.881133589 +0000 UTC m=+59.931401631" observedRunningTime="2025-01-30 13:50:05.78013899 +0000 UTC m=+60.830407132" watchObservedRunningTime="2025-01-30 13:50:06.037047878 +0000 UTC m=+61.087315920" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:05.988 [WARNING][5788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8a9341d-4cd7-4b79-b7a9-9f342499286e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5", Pod:"coredns-7db6d8ff4d-j46xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08baf4d59f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:05.989 [INFO][5788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:05.989 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" iface="eth0" netns="" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:05.989 [INFO][5788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:05.989 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.049 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.050 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.050 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.059 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.059 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.064 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.068231 containerd[1785]: 2025-01-30 13:50:06.066 [INFO][5788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.068231 containerd[1785]: time="2025-01-30T13:50:06.068197116Z" level=info msg="TearDown network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" successfully" Jan 30 13:50:06.068942 containerd[1785]: time="2025-01-30T13:50:06.068241217Z" level=info msg="StopPodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" returns successfully" Jan 30 13:50:06.071602 containerd[1785]: time="2025-01-30T13:50:06.069915656Z" level=info msg="RemovePodSandbox for \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" Jan 30 13:50:06.071602 containerd[1785]: time="2025-01-30T13:50:06.070457469Z" level=info msg="Forcibly stopping sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\"" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.141 [WARNING][5814] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c8a9341d-4cd7-4b79-b7a9-9f342499286e", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"d68d4ec1eb51bca23e9bf69717b48961f8e620145c7deb83f8861b45ad5a7ba5", Pod:"coredns-7db6d8ff4d-j46xv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali08baf4d59f2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.141 [INFO][5814] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.142 [INFO][5814] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" iface="eth0" netns="" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.142 [INFO][5814] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.142 [INFO][5814] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.179 [INFO][5821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.179 [INFO][5821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.179 [INFO][5821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.192 [WARNING][5821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.192 [INFO][5821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" HandleID="k8s-pod-network.f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--j46xv-eth0" Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.197 [INFO][5821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.199973 containerd[1785]: 2025-01-30 13:50:06.198 [INFO][5814] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db" Jan 30 13:50:06.201233 containerd[1785]: time="2025-01-30T13:50:06.200411048Z" level=info msg="TearDown network for sandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" successfully" Jan 30 13:50:06.211591 containerd[1785]: time="2025-01-30T13:50:06.211263705Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:06.211591 containerd[1785]: time="2025-01-30T13:50:06.211379008Z" level=info msg="RemovePodSandbox \"f7fee14870bcf9783ef12b2ede6735d6bb4645d308f11bf8f1f44cde74d184db\" returns successfully" Jan 30 13:50:06.212626 containerd[1785]: time="2025-01-30T13:50:06.212281530Z" level=info msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.248 [WARNING][5839] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0", GenerateName:"calico-kube-controllers-5d9586bbdc-", Namespace:"calico-system", SelfLink:"", UID:"3ddfe995-7c01-4e32-8dca-763b123eb964", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9586bbdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0", Pod:"calico-kube-controllers-5d9586bbdc-85zhb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9cbecc6018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.249 [INFO][5839] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.249 [INFO][5839] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" iface="eth0" netns="" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.249 [INFO][5839] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.249 [INFO][5839] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.268 [INFO][5846] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.269 [INFO][5846] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.269 [INFO][5846] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.274 [WARNING][5846] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.274 [INFO][5846] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.276 [INFO][5846] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.278406 containerd[1785]: 2025-01-30 13:50:06.277 [INFO][5839] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.279176 containerd[1785]: time="2025-01-30T13:50:06.278463398Z" level=info msg="TearDown network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" successfully" Jan 30 13:50:06.279176 containerd[1785]: time="2025-01-30T13:50:06.278494498Z" level=info msg="StopPodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" returns successfully" Jan 30 13:50:06.279416 containerd[1785]: time="2025-01-30T13:50:06.279384619Z" level=info msg="RemovePodSandbox for \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" Jan 30 13:50:06.279501 containerd[1785]: time="2025-01-30T13:50:06.279422020Z" level=info msg="Forcibly stopping sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\"" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.316 [WARNING][5864] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0", GenerateName:"calico-kube-controllers-5d9586bbdc-", Namespace:"calico-system", SelfLink:"", UID:"3ddfe995-7c01-4e32-8dca-763b123eb964", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5d9586bbdc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"0ca751fdfe072cabfa9f8c9d0652262239808457a20e5fce49d5ac8a522936f0", Pod:"calico-kube-controllers-5d9586bbdc-85zhb", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.27.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib9cbecc6018", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.316 [INFO][5864] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.316 [INFO][5864] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" iface="eth0" netns="" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.316 [INFO][5864] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.316 [INFO][5864] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.334 [INFO][5870] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.335 [INFO][5870] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.335 [INFO][5870] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.341 [WARNING][5870] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.341 [INFO][5870] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" HandleID="k8s-pod-network.df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--kube--controllers--5d9586bbdc--85zhb-eth0" Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.342 [INFO][5870] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.344838 containerd[1785]: 2025-01-30 13:50:06.343 [INFO][5864] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd" Jan 30 13:50:06.345529 containerd[1785]: time="2025-01-30T13:50:06.344869771Z" level=info msg="TearDown network for sandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" successfully" Jan 30 13:50:06.355522 containerd[1785]: time="2025-01-30T13:50:06.355467522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:06.355686 containerd[1785]: time="2025-01-30T13:50:06.355553024Z" level=info msg="RemovePodSandbox \"df53dffe54efda3366d0b639a0f4dfae3e94eecfdd6cab597f14d536ee3290dd\" returns successfully" Jan 30 13:50:06.356276 containerd[1785]: time="2025-01-30T13:50:06.356239640Z" level=info msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.388 [WARNING][5888] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c02d11f-d543-425f-a360-1b4417202889", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f", Pod:"calico-apiserver-5d7b4b4c89-d7hdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fc3c317de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.388 [INFO][5888] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.388 [INFO][5888] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" iface="eth0" netns="" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.388 [INFO][5888] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.388 [INFO][5888] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.407 [INFO][5894] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.407 [INFO][5894] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.407 [INFO][5894] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.413 [WARNING][5894] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.413 [INFO][5894] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.415 [INFO][5894] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.418823 containerd[1785]: 2025-01-30 13:50:06.417 [INFO][5888] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.418823 containerd[1785]: time="2025-01-30T13:50:06.418746822Z" level=info msg="TearDown network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" successfully" Jan 30 13:50:06.418823 containerd[1785]: time="2025-01-30T13:50:06.418775122Z" level=info msg="StopPodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" returns successfully" Jan 30 13:50:06.420287 containerd[1785]: time="2025-01-30T13:50:06.420161355Z" level=info msg="RemovePodSandbox for \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" Jan 30 13:50:06.420287 containerd[1785]: time="2025-01-30T13:50:06.420213656Z" level=info msg="Forcibly stopping sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\"" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.461 [WARNING][5912] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0", GenerateName:"calico-apiserver-5d7b4b4c89-", Namespace:"calico-apiserver", SelfLink:"", UID:"3c02d11f-d543-425f-a360-1b4417202889", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5d7b4b4c89", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"e58f08b265490f559dc3eb6c15818a0d4edc19c113791c33bddae17ff0519f7f", Pod:"calico-apiserver-5d7b4b4c89-d7hdd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.27.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7fc3c317de0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.461 [INFO][5912] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.462 [INFO][5912] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" iface="eth0" netns="" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.462 [INFO][5912] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.462 [INFO][5912] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.480 [INFO][5918] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.480 [INFO][5918] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.481 [INFO][5918] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.487 [WARNING][5918] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.487 [INFO][5918] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" HandleID="k8s-pod-network.dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Workload="ci--4081.3.0--a--95297e853e-k8s-calico--apiserver--5d7b4b4c89--d7hdd-eth0" Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.488 [INFO][5918] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.491151 containerd[1785]: 2025-01-30 13:50:06.489 [INFO][5912] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03" Jan 30 13:50:06.491151 containerd[1785]: time="2025-01-30T13:50:06.490758228Z" level=info msg="TearDown network for sandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" successfully" Jan 30 13:50:06.497673 containerd[1785]: time="2025-01-30T13:50:06.497639491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:06.497754 containerd[1785]: time="2025-01-30T13:50:06.497712093Z" level=info msg="RemovePodSandbox \"dc0cfc1860cfd6c49aed9e5cc7e04efc55d77f3a3c42c5e401a107a6f50baf03\" returns successfully" Jan 30 13:50:06.498332 containerd[1785]: time="2025-01-30T13:50:06.498294506Z" level=info msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.534 [WARNING][5936] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4977d573-54f2-415d-abc0-e669e05a5801", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760", Pod:"coredns-7db6d8ff4d-6fc58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib993b592120", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.534 [INFO][5936] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.534 [INFO][5936] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" iface="eth0" netns="" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.534 [INFO][5936] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.534 [INFO][5936] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.553 [INFO][5942] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.553 [INFO][5942] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.553 [INFO][5942] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.559 [WARNING][5942] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.559 [INFO][5942] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.560 [INFO][5942] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.563109 containerd[1785]: 2025-01-30 13:50:06.562 [INFO][5936] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.564235 containerd[1785]: time="2025-01-30T13:50:06.563159643Z" level=info msg="TearDown network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" successfully" Jan 30 13:50:06.564235 containerd[1785]: time="2025-01-30T13:50:06.563191244Z" level=info msg="StopPodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" returns successfully" Jan 30 13:50:06.564235 containerd[1785]: time="2025-01-30T13:50:06.563718056Z" level=info msg="RemovePodSandbox for \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" Jan 30 13:50:06.564235 containerd[1785]: time="2025-01-30T13:50:06.563786858Z" level=info msg="Forcibly stopping sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\"" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.600 [WARNING][5960] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"4977d573-54f2-415d-abc0-e669e05a5801", ResourceVersion:"774", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 49, 19, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-95297e853e", ContainerID:"5cb403682a1ee1445772020ebf92cd6c9d2de8b2fbc76c452ee14110f2068760", Pod:"coredns-7db6d8ff4d-6fc58", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.27.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib993b592120", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.601 [INFO][5960] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.601 [INFO][5960] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" iface="eth0" netns="" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.601 [INFO][5960] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.601 [INFO][5960] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.620 [INFO][5967] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.620 [INFO][5967] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.620 [INFO][5967] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.625 [WARNING][5967] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.625 [INFO][5967] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" HandleID="k8s-pod-network.91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Workload="ci--4081.3.0--a--95297e853e-k8s-coredns--7db6d8ff4d--6fc58-eth0" Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.627 [INFO][5967] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:50:06.629811 containerd[1785]: 2025-01-30 13:50:06.628 [INFO][5960] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff" Jan 30 13:50:06.630498 containerd[1785]: time="2025-01-30T13:50:06.629876924Z" level=info msg="TearDown network for sandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" successfully" Jan 30 13:50:06.638150 containerd[1785]: time="2025-01-30T13:50:06.638109819Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:50:06.638298 containerd[1785]: time="2025-01-30T13:50:06.638183521Z" level=info msg="RemovePodSandbox \"91b5878d1d35f1b4d3b659a386270153fe596d53eb5d70f664bcbde0d23e0cff\" returns successfully" Jan 30 13:50:07.025508 containerd[1785]: time="2025-01-30T13:50:07.025454797Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:07.028594 containerd[1785]: time="2025-01-30T13:50:07.028524170Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:50:07.033966 containerd[1785]: time="2025-01-30T13:50:07.033901397Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:07.039761 containerd[1785]: time="2025-01-30T13:50:07.039732535Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:50:07.040550 containerd[1785]: time="2025-01-30T13:50:07.040454052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.159094958s" Jan 30 13:50:07.040550 containerd[1785]: time="2025-01-30T13:50:07.040499953Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:50:07.043358 containerd[1785]: time="2025-01-30T13:50:07.043329120Z" level=info msg="CreateContainer within sandbox \"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:50:07.083680 containerd[1785]: time="2025-01-30T13:50:07.083619475Z" level=info msg="CreateContainer within sandbox \"55c75af1e579d7152b3588fe81ea7204fe9c6bdd30ade163b880e674f7fb9232\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"95c0fc5e3c0bd9dea02838d655240aa1415f6d7a2c86f05c27656e0d5b857960\"" Jan 30 13:50:07.084611 containerd[1785]: time="2025-01-30T13:50:07.084575398Z" level=info msg="StartContainer for \"95c0fc5e3c0bd9dea02838d655240aa1415f6d7a2c86f05c27656e0d5b857960\"" Jan 30 13:50:07.174434 containerd[1785]: time="2025-01-30T13:50:07.172210274Z" level=info msg="StartContainer for \"95c0fc5e3c0bd9dea02838d655240aa1415f6d7a2c86f05c27656e0d5b857960\" returns successfully" Jan 30 13:50:07.573126 kubelet[3438]: I0130 13:50:07.573089 3438 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:50:07.573126 kubelet[3438]: I0130 13:50:07.573128 3438 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:50:07.798418 kubelet[3438]: I0130 13:50:07.798321 3438 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-pbr7p" podStartSLOduration=32.108579677 podStartE2EDuration="39.798294208s" podCreationTimestamp="2025-01-30 13:49:28 +0000 UTC" firstStartedPulling="2025-01-30 13:49:59.351854848 +0000 UTC m=+54.402122890" lastFinishedPulling="2025-01-30 13:50:07.041569379 +0000 UTC m=+62.091837421" observedRunningTime="2025-01-30 13:50:07.795657346 +0000 UTC m=+62.845925388" watchObservedRunningTime="2025-01-30 13:50:07.798294208 +0000 UTC m=+62.848562350" Jan 30 13:50:07.802005 kubelet[3438]: I0130 13:50:07.801917 3438 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:50:47.580965 systemd[1]: run-containerd-runc-k8s.io-5d0e908e5a3ed1940c4259c85b87a5fe016b250c61b7af7b5fe498c75f61cae6-runc.ji8ZDy.mount: Deactivated successfully. Jan 30 13:51:07.010236 systemd[1]: Started sshd@7-10.200.8.41:22-10.200.16.10:46288.service - OpenSSH per-connection server daemon (10.200.16.10:46288). Jan 30 13:51:07.692373 sshd[6139]: Accepted publickey for core from 10.200.16.10 port 46288 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:07.694770 sshd[6139]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:07.702780 systemd-logind[1763]: New session 10 of user core. Jan 30 13:51:07.708372 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:51:08.235096 sshd[6139]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:08.243354 systemd[1]: sshd@7-10.200.8.41:22-10.200.16.10:46288.service: Deactivated successfully. Jan 30 13:51:08.249164 systemd-logind[1763]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:51:08.249590 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:51:08.253025 systemd-logind[1763]: Removed session 10. 
Jan 30 13:51:13.352787 systemd[1]: Started sshd@8-10.200.8.41:22-10.200.16.10:46298.service - OpenSSH per-connection server daemon (10.200.16.10:46298). Jan 30 13:51:14.025197 sshd[6173]: Accepted publickey for core from 10.200.16.10 port 46298 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:14.026867 sshd[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:14.031445 systemd-logind[1763]: New session 11 of user core. Jan 30 13:51:14.039276 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 30 13:51:14.561094 sshd[6173]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:14.565902 systemd-logind[1763]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:51:14.569900 systemd[1]: sshd@8-10.200.8.41:22-10.200.16.10:46298.service: Deactivated successfully. Jan 30 13:51:14.578839 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:51:14.581479 systemd-logind[1763]: Removed session 11. Jan 30 13:51:19.679258 systemd[1]: Started sshd@9-10.200.8.41:22-10.200.16.10:42736.service - OpenSSH per-connection server daemon (10.200.16.10:42736). Jan 30 13:51:20.350424 sshd[6218]: Accepted publickey for core from 10.200.16.10 port 42736 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:20.352344 sshd[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:20.356661 systemd-logind[1763]: New session 12 of user core. Jan 30 13:51:20.359235 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:51:20.884044 sshd[6218]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:20.887651 systemd[1]: sshd@9-10.200.8.41:22-10.200.16.10:42736.service: Deactivated successfully. Jan 30 13:51:20.894144 systemd-logind[1763]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:51:20.895066 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:51:20.896755 systemd-logind[1763]: Removed session 12. Jan 30 13:51:20.999620 systemd[1]: Started sshd@10-10.200.8.41:22-10.200.16.10:42746.service - OpenSSH per-connection server daemon (10.200.16.10:42746). Jan 30 13:51:21.669751 sshd[6233]: Accepted publickey for core from 10.200.16.10 port 42746 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:21.671425 sshd[6233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:21.675681 systemd-logind[1763]: New session 13 of user core. Jan 30 13:51:21.680472 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:51:22.238324 sshd[6233]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:22.243759 systemd[1]: sshd@10-10.200.8.41:22-10.200.16.10:42746.service: Deactivated successfully. Jan 30 13:51:22.249345 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:51:22.250266 systemd-logind[1763]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:51:22.251339 systemd-logind[1763]: Removed session 13. Jan 30 13:51:22.354265 systemd[1]: Started sshd@11-10.200.8.41:22-10.200.16.10:42750.service - OpenSSH per-connection server daemon (10.200.16.10:42750). 
Jan 30 13:51:23.023810 sshd[6247]: Accepted publickey for core from 10.200.16.10 port 42750 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:23.025400 sshd[6247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:23.029622 systemd-logind[1763]: New session 14 of user core. Jan 30 13:51:23.034228 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:51:23.559598 sshd[6247]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:23.563796 systemd[1]: sshd@11-10.200.8.41:22-10.200.16.10:42750.service: Deactivated successfully. Jan 30 13:51:23.570067 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:51:23.570911 systemd-logind[1763]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:51:23.571847 systemd-logind[1763]: Removed session 14. Jan 30 13:51:28.675265 systemd[1]: Started sshd@12-10.200.8.41:22-10.200.16.10:36954.service - OpenSSH per-connection server daemon (10.200.16.10:36954). Jan 30 13:51:29.345339 sshd[6261]: Accepted publickey for core from 10.200.16.10 port 36954 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:29.347124 sshd[6261]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:29.351584 systemd-logind[1763]: New session 15 of user core. Jan 30 13:51:29.354549 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:51:29.885742 sshd[6261]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:29.890722 systemd[1]: sshd@12-10.200.8.41:22-10.200.16.10:36954.service: Deactivated successfully. Jan 30 13:51:29.896129 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:51:29.897053 systemd-logind[1763]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:51:29.898040 systemd-logind[1763]: Removed session 15. Jan 30 13:51:35.002262 systemd[1]: Started sshd@13-10.200.8.41:22-10.200.16.10:36962.service - OpenSSH per-connection server daemon (10.200.16.10:36962). Jan 30 13:51:35.671244 sshd[6297]: Accepted publickey for core from 10.200.16.10 port 36962 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:35.673430 sshd[6297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:35.679603 systemd-logind[1763]: New session 16 of user core. Jan 30 13:51:35.685849 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:51:36.222787 sshd[6297]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:36.227194 systemd[1]: sshd@13-10.200.8.41:22-10.200.16.10:36962.service: Deactivated successfully. Jan 30 13:51:36.232439 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:51:36.233498 systemd-logind[1763]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:51:36.234557 systemd-logind[1763]: Removed session 16. Jan 30 13:51:41.342262 systemd[1]: Started sshd@14-10.200.8.41:22-10.200.16.10:35672.service - OpenSSH per-connection server daemon (10.200.16.10:35672). Jan 30 13:51:41.880053 systemd[1]: run-containerd-runc-k8s.io-11f2ea09836a225087751090d74bea5aba6671ffef70bb9060cb32315ef5c317-runc.XvJUg0.mount: Deactivated successfully. 
Jan 30 13:51:42.014769 sshd[6310]: Accepted publickey for core from 10.200.16.10 port 35672 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:42.016478 sshd[6310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:42.021106 systemd-logind[1763]: New session 17 of user core. Jan 30 13:51:42.026469 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:51:42.555100 sshd[6310]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:42.561084 systemd[1]: sshd@14-10.200.8.41:22-10.200.16.10:35672.service: Deactivated successfully. Jan 30 13:51:42.565461 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:51:42.566505 systemd-logind[1763]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:51:42.567552 systemd-logind[1763]: Removed session 17. Jan 30 13:51:42.670527 systemd[1]: Started sshd@15-10.200.8.41:22-10.200.16.10:35678.service - OpenSSH per-connection server daemon (10.200.16.10:35678). Jan 30 13:51:43.348283 sshd[6344]: Accepted publickey for core from 10.200.16.10 port 35678 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:43.349841 sshd[6344]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:43.354471 systemd-logind[1763]: New session 18 of user core. Jan 30 13:51:43.358409 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:51:43.935321 sshd[6344]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:43.940598 systemd[1]: sshd@15-10.200.8.41:22-10.200.16.10:35678.service: Deactivated successfully. Jan 30 13:51:43.946326 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:51:43.947286 systemd-logind[1763]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:51:43.948327 systemd-logind[1763]: Removed session 18. Jan 30 13:51:44.051253 systemd[1]: Started sshd@16-10.200.8.41:22-10.200.16.10:35686.service - OpenSSH per-connection server daemon (10.200.16.10:35686). Jan 30 13:51:44.551066 update_engine[1768]: I20250130 13:51:44.551010 1768 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 30 13:51:44.551066 update_engine[1768]: I20250130 13:51:44.551062 1768 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 30 13:51:44.551624 update_engine[1768]: I20250130 13:51:44.551280 1768 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 30 13:51:44.551873 update_engine[1768]: I20250130 13:51:44.551822 1768 omaha_request_params.cc:62] Current group set to lts Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552004 1768 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552024 1768 update_attempter.cc:643] Scheduling an action processor start. 
Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552047 1768 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552084 1768 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552162 1768 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552172 1768 omaha_request_action.cc:272] Request: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: Jan 30 13:51:44.552412 update_engine[1768]: I20250130 13:51:44.552182 1768 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:51:44.553093 locksmithd[1820]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 30 13:51:44.553724 update_engine[1768]: I20250130 13:51:44.553694 1768 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:51:44.554113 update_engine[1768]: I20250130 13:51:44.554080 1768 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:51:44.576974 update_engine[1768]: E20250130 13:51:44.576907 1768 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:51:44.577194 update_engine[1768]: I20250130 13:51:44.577163 1768 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 30 13:51:44.732015 sshd[6356]: Accepted publickey for core from 10.200.16.10 port 35686 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:44.733636 sshd[6356]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:44.738044 systemd-logind[1763]: New session 19 of user core. Jan 30 13:51:44.741229 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:51:47.240492 sshd[6356]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:47.244723 systemd[1]: sshd@16-10.200.8.41:22-10.200.16.10:35686.service: Deactivated successfully. Jan 30 13:51:47.250799 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:51:47.251756 systemd-logind[1763]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:51:47.253475 systemd-logind[1763]: Removed session 19. Jan 30 13:51:47.356582 systemd[1]: Started sshd@17-10.200.8.41:22-10.200.16.10:43704.service - OpenSSH per-connection server daemon (10.200.16.10:43704). Jan 30 13:51:48.034179 sshd[6394]: Accepted publickey for core from 10.200.16.10 port 43704 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:48.035909 sshd[6394]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:48.040255 systemd-logind[1763]: New session 20 of user core. Jan 30 13:51:48.046210 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:51:48.679210 sshd[6394]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:48.685001 systemd[1]: sshd@17-10.200.8.41:22-10.200.16.10:43704.service: Deactivated successfully. Jan 30 13:51:48.689470 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 30 13:51:48.689788 systemd-logind[1763]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:51:48.691465 systemd-logind[1763]: Removed session 20. Jan 30 13:51:48.798553 systemd[1]: Started sshd@18-10.200.8.41:22-10.200.16.10:43718.service - OpenSSH per-connection server daemon (10.200.16.10:43718). Jan 30 13:51:49.470585 sshd[6428]: Accepted publickey for core from 10.200.16.10 port 43718 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:49.472354 sshd[6428]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:49.477637 systemd-logind[1763]: New session 21 of user core. Jan 30 13:51:49.482268 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:51:50.008195 sshd[6428]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:50.014939 systemd[1]: sshd@18-10.200.8.41:22-10.200.16.10:43718.service: Deactivated successfully. Jan 30 13:51:50.023623 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:51:50.025355 systemd-logind[1763]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:51:50.027320 systemd-logind[1763]: Removed session 21. Jan 30 13:51:54.555528 update_engine[1768]: I20250130 13:51:54.555351 1768 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:51:54.556541 update_engine[1768]: I20250130 13:51:54.556013 1768 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:51:54.556541 update_engine[1768]: I20250130 13:51:54.556368 1768 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:51:54.572316 update_engine[1768]: E20250130 13:51:54.572269 1768 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:51:54.572450 update_engine[1768]: I20250130 13:51:54.572347 1768 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 30 13:51:55.123270 systemd[1]: Started sshd@19-10.200.8.41:22-10.200.16.10:43724.service - OpenSSH per-connection server daemon (10.200.16.10:43724). Jan 30 13:51:55.795220 sshd[6447]: Accepted publickey for core from 10.200.16.10 port 43724 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:51:55.796879 sshd[6447]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:51:55.801704 systemd-logind[1763]: New session 22 of user core. Jan 30 13:51:55.805416 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:51:56.342859 sshd[6447]: pam_unix(sshd:session): session closed for user core Jan 30 13:51:56.347479 systemd[1]: sshd@19-10.200.8.41:22-10.200.16.10:43724.service: Deactivated successfully. Jan 30 13:51:56.352459 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:51:56.353430 systemd-logind[1763]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:51:56.354487 systemd-logind[1763]: Removed session 22. Jan 30 13:52:01.459603 systemd[1]: Started sshd@20-10.200.8.41:22-10.200.16.10:49572.service - OpenSSH per-connection server daemon (10.200.16.10:49572). Jan 30 13:52:02.128792 sshd[6461]: Accepted publickey for core from 10.200.16.10 port 49572 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:52:02.130430 sshd[6461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:02.134596 systemd-logind[1763]: New session 23 of user core. Jan 30 13:52:02.145395 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 30 13:52:02.703187 sshd[6461]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:02.706746 systemd[1]: sshd@20-10.200.8.41:22-10.200.16.10:49572.service: Deactivated successfully. Jan 30 13:52:02.712099 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:52:02.713518 systemd-logind[1763]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:52:02.714848 systemd-logind[1763]: Removed session 23. Jan 30 13:52:04.551093 update_engine[1768]: I20250130 13:52:04.551000 1768 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:52:04.551725 update_engine[1768]: I20250130 13:52:04.551366 1768 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:52:04.551725 update_engine[1768]: I20250130 13:52:04.551694 1768 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:52:04.556816 update_engine[1768]: E20250130 13:52:04.556769 1768 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:52:04.556938 update_engine[1768]: I20250130 13:52:04.556852 1768 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 30 13:52:07.823725 systemd[1]: Started sshd@21-10.200.8.41:22-10.200.16.10:42386.service - OpenSSH per-connection server daemon (10.200.16.10:42386). Jan 30 13:52:08.497614 sshd[6477]: Accepted publickey for core from 10.200.16.10 port 42386 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:52:08.499527 sshd[6477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:08.504173 systemd-logind[1763]: New session 24 of user core. Jan 30 13:52:08.508270 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 30 13:52:09.032188 sshd[6477]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:09.036014 systemd[1]: sshd@21-10.200.8.41:22-10.200.16.10:42386.service: Deactivated successfully. Jan 30 13:52:09.042326 systemd-logind[1763]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:52:09.043484 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:52:09.045810 systemd-logind[1763]: Removed session 24. Jan 30 13:52:14.151307 systemd[1]: Started sshd@22-10.200.8.41:22-10.200.16.10:42394.service - OpenSSH per-connection server daemon (10.200.16.10:42394). Jan 30 13:52:14.555718 update_engine[1768]: I20250130 13:52:14.555627 1768 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:52:14.556409 update_engine[1768]: I20250130 13:52:14.556010 1768 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:52:14.556409 update_engine[1768]: I20250130 13:52:14.556344 1768 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:52:14.593999 update_engine[1768]: E20250130 13:52:14.593905 1768 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:52:14.594212 update_engine[1768]: I20250130 13:52:14.594038 1768 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:52:14.594212 update_engine[1768]: I20250130 13:52:14.594058 1768 omaha_request_action.cc:617] Omaha request response: Jan 30 13:52:14.594212 update_engine[1768]: E20250130 13:52:14.594159 1768 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 30 13:52:14.594212 update_engine[1768]: I20250130 13:52:14.594192 1768 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 30 13:52:14.594212 update_engine[1768]: I20250130 13:52:14.594201 1768 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:52:14.594212 update_engine[1768]: I20250130 13:52:14.594210 1768 update_attempter.cc:306] Processing Done. Jan 30 13:52:14.594496 update_engine[1768]: E20250130 13:52:14.594233 1768 update_attempter.cc:619] Update failed. Jan 30 13:52:14.594496 update_engine[1768]: I20250130 13:52:14.594246 1768 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 30 13:52:14.594496 update_engine[1768]: I20250130 13:52:14.594256 1768 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 30 13:52:14.594496 update_engine[1768]: I20250130 13:52:14.594266 1768 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 30 13:52:14.594781 locksmithd[1820]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 30 13:52:14.595408 update_engine[1768]: I20250130 13:52:14.594764 1768 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 30 13:52:14.595408 update_engine[1768]: I20250130 13:52:14.594816 1768 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 30 13:52:14.595408 update_engine[1768]: I20250130 13:52:14.594828 1768 omaha_request_action.cc:272] Request: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: Jan 30 13:52:14.595408 update_engine[1768]: I20250130 13:52:14.594838 1768 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 30 13:52:14.595408 update_engine[1768]: I20250130 13:52:14.595145 1768 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 30 13:52:14.595830 update_engine[1768]: I20250130 13:52:14.595444 1768 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 30 13:52:14.758471 update_engine[1768]: E20250130 13:52:14.758395 1768 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758497 1768 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758509 1768 omaha_request_action.cc:617] Omaha request response: Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758519 1768 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758531 1768 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758538 1768 update_attempter.cc:306] Processing Done. Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758548 1768 update_attempter.cc:310] Error event sent. 
Jan 30 13:52:14.758658 update_engine[1768]: I20250130 13:52:14.758561 1768 update_check_scheduler.cc:74] Next update check in 42m20s Jan 30 13:52:14.759064 locksmithd[1820]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 30 13:52:14.823585 sshd[6510]: Accepted publickey for core from 10.200.16.10 port 42394 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:52:14.825775 sshd[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:14.830453 systemd-logind[1763]: New session 25 of user core. Jan 30 13:52:14.837415 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:52:15.351896 sshd[6510]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:15.356273 systemd[1]: sshd@22-10.200.8.41:22-10.200.16.10:42394.service: Deactivated successfully. Jan 30 13:52:15.360778 systemd-logind[1763]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:52:15.361119 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:52:15.363058 systemd-logind[1763]: Removed session 25. Jan 30 13:52:20.468368 systemd[1]: Started sshd@23-10.200.8.41:22-10.200.16.10:34522.service - OpenSSH per-connection server daemon (10.200.16.10:34522). Jan 30 13:52:21.138624 sshd[6545]: Accepted publickey for core from 10.200.16.10 port 34522 ssh2: RSA SHA256:m+fyOXLT1xQI0zPWq4mPqcO4MTs92PViZDziGEyvpSc Jan 30 13:52:21.140522 sshd[6545]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:52:21.145341 systemd-logind[1763]: New session 26 of user core. Jan 30 13:52:21.153209 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:52:21.668711 sshd[6545]: pam_unix(sshd:session): session closed for user core Jan 30 13:52:21.672445 systemd[1]: sshd@23-10.200.8.41:22-10.200.16.10:34522.service: Deactivated successfully. Jan 30 13:52:21.678248 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:52:21.679524 systemd-logind[1763]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:52:21.680470 systemd-logind[1763]: Removed session 26.