Jan 14 14:34:01.048302 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 19:40:50 -00 2025
Jan 14 14:34:01.048334 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:34:01.048345 kernel: BIOS-provided physical RAM map:
Jan 14 14:34:01.048352 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Jan 14 14:34:01.048360 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Jan 14 14:34:01.048366 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Jan 14 14:34:01.048377 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Jan 14 14:34:01.048386 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Jan 14 14:34:01.048394 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Jan 14 14:34:01.048401 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Jan 14 14:34:01.048408 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Jan 14 14:34:01.048417 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Jan 14 14:34:01.048423 kernel: printk: bootconsole [earlyser0] enabled
Jan 14 14:34:01.048430 kernel: NX (Execute Disable) protection: active
Jan 14 14:34:01.048443 kernel: APIC: Static calls initialized
Jan 14 14:34:01.048450 kernel: efi: EFI v2.7 by Microsoft
Jan 14 14:34:01.048461 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Jan 14 14:34:01.048467 kernel: SMBIOS 3.1.0 present.
Jan 14 14:34:01.048478 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Jan 14 14:34:01.048485 kernel: Hypervisor detected: Microsoft Hyper-V
Jan 14 14:34:01.048494 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Jan 14 14:34:01.048502 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Jan 14 14:34:01.048510 kernel: Hyper-V: Nested features: 0x1e0101
Jan 14 14:34:01.048519 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Jan 14 14:34:01.048529 kernel: Hyper-V: Using hypercall for remote TLB flush
Jan 14 14:34:01.048539 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 14:34:01.048550 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Jan 14 14:34:01.048558 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Jan 14 14:34:01.048568 kernel: tsc: Detected 2593.906 MHz processor
Jan 14 14:34:01.048575 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 14 14:34:01.048584 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 14 14:34:01.048593 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Jan 14 14:34:01.048600 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jan 14 14:34:01.048612 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 14 14:34:01.048619 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Jan 14 14:34:01.048629 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Jan 14 14:34:01.048636 kernel: Using GB pages for direct mapping
Jan 14 14:34:01.048648 kernel: Secure boot disabled
Jan 14 14:34:01.048655 kernel: ACPI: Early table checksum verification disabled
Jan 14 14:34:01.048664 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Jan 14 14:34:01.048676 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048689 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048696 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Jan 14 14:34:01.048707 kernel: ACPI: FACS 0x000000003FFFE000 000040
Jan 14 14:34:01.048715 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048724 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048734 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048743 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048752 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048762 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048770 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Jan 14 14:34:01.048780 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Jan 14 14:34:01.048788 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Jan 14 14:34:01.048797 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Jan 14 14:34:01.048806 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Jan 14 14:34:01.048818 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Jan 14 14:34:01.048827 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Jan 14 14:34:01.048836 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Jan 14 14:34:01.048845 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Jan 14 14:34:01.048853 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Jan 14 14:34:01.048863 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Jan 14 14:34:01.048871 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 14 14:34:01.048881 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jan 14 14:34:01.048891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Jan 14 14:34:01.048902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Jan 14 14:34:01.048911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Jan 14 14:34:01.048920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Jan 14 14:34:01.048928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Jan 14 14:34:01.048939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Jan 14 14:34:01.048947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Jan 14 14:34:01.048956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Jan 14 14:34:01.048965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Jan 14 14:34:01.048973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Jan 14 14:34:01.048985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Jan 14 14:34:01.048993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Jan 14 14:34:01.049003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Jan 14 14:34:01.049013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Jan 14 14:34:01.049022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Jan 14 14:34:01.049032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Jan 14 14:34:01.049045 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Jan 14 14:34:01.049055 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Jan 14 14:34:01.049063 kernel: Zone ranges:
Jan 14 14:34:01.049074 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 14 14:34:01.049081 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jan 14 14:34:01.049089 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 14:34:01.049099 kernel: Movable zone start for each node
Jan 14 14:34:01.049112 kernel: Early memory node ranges
Jan 14 14:34:01.049124 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Jan 14 14:34:01.049132 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Jan 14 14:34:01.049143 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Jan 14 14:34:01.049162 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Jan 14 14:34:01.049179 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Jan 14 14:34:01.049200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 14 14:34:01.049218 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Jan 14 14:34:01.049231 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Jan 14 14:34:01.049239 kernel: ACPI: PM-Timer IO Port: 0x408
Jan 14 14:34:01.049251 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Jan 14 14:34:01.049269 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Jan 14 14:34:01.049284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 14 14:34:01.049298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 14 14:34:01.049310 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Jan 14 14:34:01.049320 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jan 14 14:34:01.049334 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Jan 14 14:34:01.049349 kernel: Booting paravirtualized kernel on Hyper-V
Jan 14 14:34:01.049362 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 14 14:34:01.049370 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Jan 14 14:34:01.049384 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Jan 14 14:34:01.049398 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Jan 14 14:34:01.049405 kernel: pcpu-alloc: [0] 0 1
Jan 14 14:34:01.049422 kernel: Hyper-V: PV spinlocks enabled
Jan 14 14:34:01.049438 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jan 14 14:34:01.049452 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:34:01.049460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 14 14:34:01.049467 kernel: random: crng init done
Jan 14 14:34:01.049474 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 14:34:01.049490 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 14 14:34:01.049506 kernel: Fallback order for Node 0: 0
Jan 14 14:34:01.049523 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Jan 14 14:34:01.049540 kernel: Policy zone: Normal
Jan 14 14:34:01.049559 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 14:34:01.049580 kernel: software IO TLB: area num 2.
Jan 14 14:34:01.049591 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved)
Jan 14 14:34:01.049599 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 14 14:34:01.049615 kernel: ftrace: allocating 37918 entries in 149 pages
Jan 14 14:34:01.049630 kernel: ftrace: allocated 149 pages with 4 groups
Jan 14 14:34:01.049641 kernel: Dynamic Preempt: voluntary
Jan 14 14:34:01.049649 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 14:34:01.049671 kernel: rcu: RCU event tracing is enabled.
Jan 14 14:34:01.049686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 14 14:34:01.049695 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 14:34:01.049711 kernel: Rude variant of Tasks RCU enabled.
Jan 14 14:34:01.049727 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 14:34:01.049741 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 14:34:01.049751 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 14 14:34:01.049768 kernel: Using NULL legacy PIC
Jan 14 14:34:01.049786 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Jan 14 14:34:01.049796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 14:34:01.049804 kernel: Console: colour dummy device 80x25
Jan 14 14:34:01.049821 kernel: printk: console [tty1] enabled
Jan 14 14:34:01.049833 kernel: printk: console [ttyS0] enabled
Jan 14 14:34:01.049841 kernel: printk: bootconsole [earlyser0] disabled
Jan 14 14:34:01.049858 kernel: ACPI: Core revision 20230628
Jan 14 14:34:01.049869 kernel: Failed to register legacy timer interrupt
Jan 14 14:34:01.049881 kernel: APIC: Switch to symmetric I/O mode setup
Jan 14 14:34:01.049899 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Jan 14 14:34:01.049913 kernel: Hyper-V: Using IPI hypercalls
Jan 14 14:34:01.049921 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Jan 14 14:34:01.049929 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Jan 14 14:34:01.049946 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Jan 14 14:34:01.049960 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Jan 14 14:34:01.049969 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Jan 14 14:34:01.049977 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Jan 14 14:34:01.050001 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Jan 14 14:34:01.050014 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 14 14:34:01.050022 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 14 14:34:01.050035 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 14 14:34:01.050050 kernel: Spectre V2 : Mitigation: Retpolines
Jan 14 14:34:01.050061 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 14 14:34:01.050069 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jan 14 14:34:01.050077 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Jan 14 14:34:01.050085 kernel: RETBleed: Vulnerable
Jan 14 14:34:01.050095 kernel: Speculative Store Bypass: Vulnerable
Jan 14 14:34:01.050103 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 14:34:01.050111 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 14 14:34:01.050119 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 14 14:34:01.050126 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 14 14:34:01.050134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 14 14:34:01.050142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 14 14:34:01.050150 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Jan 14 14:34:01.050158 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Jan 14 14:34:01.050166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Jan 14 14:34:01.050174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 14 14:34:01.050192 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Jan 14 14:34:01.050210 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Jan 14 14:34:01.050225 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Jan 14 14:34:01.050234 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Jan 14 14:34:01.050241 kernel: Freeing SMP alternatives memory: 32K
Jan 14 14:34:01.050255 kernel: pid_max: default: 32768 minimum: 301
Jan 14 14:34:01.050273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 14 14:34:01.050290 kernel: landlock: Up and running.
Jan 14 14:34:01.050299 kernel: SELinux: Initializing.
Jan 14 14:34:01.050308 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 14:34:01.050323 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 14 14:34:01.050338 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Jan 14 14:34:01.050349 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:34:01.050362 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:34:01.050378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 14 14:34:01.050388 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Jan 14 14:34:01.050396 kernel: signal: max sigframe size: 3632
Jan 14 14:34:01.050416 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 14:34:01.050432 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 14:34:01.050442 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 14 14:34:01.050452 kernel: smp: Bringing up secondary CPUs ...
Jan 14 14:34:01.050472 kernel: smpboot: x86: Booting SMP configuration:
Jan 14 14:34:01.050484 kernel: .... node #0, CPUs: #1
Jan 14 14:34:01.050498 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Jan 14 14:34:01.050511 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jan 14 14:34:01.050535 kernel: smp: Brought up 1 node, 2 CPUs
Jan 14 14:34:01.050551 kernel: smpboot: Max logical packages: 1
Jan 14 14:34:01.050566 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Jan 14 14:34:01.050583 kernel: devtmpfs: initialized
Jan 14 14:34:01.050603 kernel: x86/mm: Memory block size: 128MB
Jan 14 14:34:01.050620 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Jan 14 14:34:01.050635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 14:34:01.050651 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 14 14:34:01.050667 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 14:34:01.050684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 14:34:01.050700 kernel: audit: initializing netlink subsys (disabled)
Jan 14 14:34:01.050716 kernel: audit: type=2000 audit(1736865240.027:1): state=initialized audit_enabled=0 res=1
Jan 14 14:34:01.050732 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 14:34:01.050752 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 14 14:34:01.050768 kernel: cpuidle: using governor menu
Jan 14 14:34:01.050784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 14:34:01.050800 kernel: dca service started, version 1.12.1
Jan 14 14:34:01.050815 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Jan 14 14:34:01.050832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 14 14:34:01.050849 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 14:34:01.050864 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 14:34:01.050879 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 14:34:01.050900 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 14:34:01.050916 kernel: ACPI: Added _OSI(Module Device)
Jan 14 14:34:01.050933 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 14:34:01.050950 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 14 14:34:01.050965 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 14:34:01.050982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 14:34:01.050998 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 14 14:34:01.051013 kernel: ACPI: Interpreter enabled
Jan 14 14:34:01.051029 kernel: ACPI: PM: (supports S0 S5)
Jan 14 14:34:01.051050 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 14 14:34:01.051066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 14 14:34:01.051082 kernel: PCI: Ignoring E820 reservations for host bridge windows
Jan 14 14:34:01.051098 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Jan 14 14:34:01.051114 kernel: iommu: Default domain type: Translated
Jan 14 14:34:01.051129 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 14 14:34:01.051146 kernel: efivars: Registered efivars operations
Jan 14 14:34:01.051162 kernel: PCI: Using ACPI for IRQ routing
Jan 14 14:34:01.051177 kernel: PCI: System does not support PCI
Jan 14 14:34:01.051209 kernel: vgaarb: loaded
Jan 14 14:34:01.051227 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Jan 14 14:34:01.051242 kernel: VFS: Disk quotas dquot_6.6.0
Jan 14 14:34:01.051258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 14 14:34:01.051273 kernel: pnp: PnP ACPI init
Jan 14 14:34:01.051289 kernel: pnp: PnP ACPI: found 3 devices
Jan 14 14:34:01.051307 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 14 14:34:01.051322 kernel: NET: Registered PF_INET protocol family
Jan 14 14:34:01.051338 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 14 14:34:01.051359 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 14 14:34:01.051375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 14 14:34:01.051392 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 14 14:34:01.051406 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Jan 14 14:34:01.051420 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 14 14:34:01.051435 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:34:01.051450 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 14 14:34:01.051466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 14 14:34:01.051480 kernel: NET: Registered PF_XDP protocol family
Jan 14 14:34:01.051499 kernel: PCI: CLS 0 bytes, default 64
Jan 14 14:34:01.051516 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 14 14:34:01.051532 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB)
Jan 14 14:34:01.051547 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jan 14 14:34:01.051559 kernel: Initialise system trusted keyrings
Jan 14 14:34:01.051568 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jan 14 14:34:01.051578 kernel: Key type asymmetric registered
Jan 14 14:34:01.051590 kernel: Asymmetric key parser 'x509' registered
Jan 14 14:34:01.051604 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jan 14 14:34:01.051621 kernel: io scheduler mq-deadline registered
Jan 14 14:34:01.051636 kernel: io scheduler kyber registered
Jan 14 14:34:01.051650 kernel: io scheduler bfq registered
Jan 14 14:34:01.051665 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jan 14 14:34:01.051678 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 14 14:34:01.051693 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 14 14:34:01.051707 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jan 14 14:34:01.051720 kernel: i8042: PNP: No PS/2 controller found.
Jan 14 14:34:01.051911 kernel: rtc_cmos 00:02: registered as rtc0
Jan 14 14:34:01.052044 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T14:34:00 UTC (1736865240)
Jan 14 14:34:01.052160 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Jan 14 14:34:01.052176 kernel: intel_pstate: CPU model not supported
Jan 14 14:34:01.052215 kernel: efifb: probing for efifb
Jan 14 14:34:01.052228 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Jan 14 14:34:01.052241 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Jan 14 14:34:01.052256 kernel: efifb: scrolling: redraw
Jan 14 14:34:01.052275 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jan 14 14:34:01.052290 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 14:34:01.052304 kernel: fb0: EFI VGA frame buffer device
Jan 14 14:34:01.052318 kernel: pstore: Using crash dump compression: deflate
Jan 14 14:34:01.052334 kernel: pstore: Registered efi_pstore as persistent store backend
Jan 14 14:34:01.052348 kernel: NET: Registered PF_INET6 protocol family
Jan 14 14:34:01.052363 kernel: Segment Routing with IPv6
Jan 14 14:34:01.052377 kernel: In-situ OAM (IOAM) with IPv6
Jan 14 14:34:01.052391 kernel: NET: Registered PF_PACKET protocol family
Jan 14 14:34:01.052406 kernel: Key type dns_resolver registered
Jan 14 14:34:01.052422 kernel: IPI shorthand broadcast: enabled
Jan 14 14:34:01.052436 kernel: sched_clock: Marking stable (756008600, 39437000)->(971039900, -175594300)
Jan 14 14:34:01.052450 kernel: registered taskstats version 1
Jan 14 14:34:01.052464 kernel: Loading compiled-in X.509 certificates
Jan 14 14:34:01.052479 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447'
Jan 14 14:34:01.052493 kernel: Key type .fscrypt registered
Jan 14 14:34:01.052506 kernel: Key type fscrypt-provisioning registered
Jan 14 14:34:01.052521 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 14 14:34:01.052538 kernel: ima: Allocated hash algorithm: sha1
Jan 14 14:34:01.052552 kernel: ima: No architecture policies found
Jan 14 14:34:01.052566 kernel: clk: Disabling unused clocks
Jan 14 14:34:01.052579 kernel: Freeing unused kernel image (initmem) memory: 42844K
Jan 14 14:34:01.052594 kernel: Write protecting the kernel read-only data: 36864k
Jan 14 14:34:01.052608 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Jan 14 14:34:01.052622 kernel: Run /init as init process
Jan 14 14:34:01.052637 kernel: with arguments:
Jan 14 14:34:01.052651 kernel: /init
Jan 14 14:34:01.052664 kernel: with environment:
Jan 14 14:34:01.052681 kernel: HOME=/
Jan 14 14:34:01.052694 kernel: TERM=linux
Jan 14 14:34:01.052708 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jan 14 14:34:01.052725 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 14 14:34:01.052742 systemd[1]: Detected virtualization microsoft.
Jan 14 14:34:01.052757 systemd[1]: Detected architecture x86-64.
Jan 14 14:34:01.052771 systemd[1]: Running in initrd.
Jan 14 14:34:01.052788 systemd[1]: No hostname configured, using default hostname.
Jan 14 14:34:01.052802 systemd[1]: Hostname set to .
Jan 14 14:34:01.052817 systemd[1]: Initializing machine ID from random generator.
Jan 14 14:34:01.052832 systemd[1]: Queued start job for default target initrd.target.
Jan 14 14:34:01.052847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 14:34:01.052862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 14:34:01.052878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 14:34:01.052893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 14:34:01.052909 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 14:34:01.052924 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 14:34:01.052942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 14 14:34:01.052958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 14 14:34:01.052975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 14:34:01.052994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:34:01.053007 systemd[1]: Reached target paths.target - Path Units.
Jan 14 14:34:01.053023 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 14:34:01.053038 systemd[1]: Reached target swap.target - Swaps.
Jan 14 14:34:01.053051 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 14:34:01.053064 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 14:34:01.053078 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 14:34:01.053092 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 14:34:01.053105 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 14 14:34:01.053118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 14:34:01.053136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 14:34:01.053150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 14:34:01.053164 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 14:34:01.053179 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 14:34:01.053204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 14:34:01.053218 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 14:34:01.053230 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 14:34:01.053239 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 14:34:01.056229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 14:34:01.056258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:01.056275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 14:34:01.056319 systemd-journald[176]: Collecting audit messages is disabled.
Jan 14 14:34:01.056355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 14:34:01.056374 systemd-journald[176]: Journal started
Jan 14 14:34:01.056421 systemd-journald[176]: Runtime Journal (/run/log/journal/75e83cb9b3f944b2a84b35866f111009) is 8.0M, max 158.8M, 150.8M free.
Jan 14 14:34:01.062083 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 14:34:01.061947 systemd-modules-load[177]: Inserted module 'overlay'
Jan 14 14:34:01.064660 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 14:34:01.068521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:01.083473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:34:01.090632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 14:34:01.099390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 14:34:01.116209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 14:34:01.121204 kernel: Bridge firewalling registered
Jan 14 14:34:01.121388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:34:01.129394 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 14:34:01.134313 systemd-modules-load[177]: Inserted module 'br_netfilter'
Jan 14 14:34:01.135989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 14:34:01.141657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 14:34:01.149460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:34:01.162494 dracut-cmdline[201]: dracut-dracut-053
Jan 14 14:34:01.166076 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507
Jan 14 14:34:01.166044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 14:34:01.171344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 14:34:01.200404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 14:34:01.214640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 14:34:01.218241 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 14:34:01.266041 systemd-resolved[252]: Positive Trust Anchors:
Jan 14 14:34:01.266066 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 14:34:01.275381 kernel: SCSI subsystem initialized
Jan 14 14:34:01.266119 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 14:34:01.291734 systemd-resolved[252]: Defaulting to hostname 'linux'.
Jan 14 14:34:01.295152 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 14:34:01.304261 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 14:34:01.301527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:34:01.314204 kernel: iscsi: registered transport (tcp)
Jan 14 14:34:01.335408 kernel: iscsi: registered transport (qla4xxx)
Jan 14 14:34:01.335485 kernel: QLogic iSCSI HBA Driver
Jan 14 14:34:01.371336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 14:34:01.381332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 14:34:01.407487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 14:34:01.407576 kernel: device-mapper: uevent: version 1.0.3
Jan 14 14:34:01.411206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 14 14:34:01.451217 kernel: raid6: avx512x4 gen() 18996 MB/s
Jan 14 14:34:01.470202 kernel: raid6: avx512x2 gen() 18755 MB/s
Jan 14 14:34:01.489195 kernel: raid6: avx512x1 gen() 17972 MB/s
Jan 14 14:34:01.508201 kernel: raid6: avx2x4 gen() 18698 MB/s
Jan 14 14:34:01.527200 kernel: raid6: avx2x2 gen() 18676 MB/s
Jan 14 14:34:01.547211 kernel: raid6: avx2x1 gen() 14074 MB/s
Jan 14 14:34:01.547243 kernel: raid6: using algorithm avx512x4 gen() 18996 MB/s
Jan 14 14:34:01.567719 kernel: raid6: .... xor() 7263 MB/s, rmw enabled
Jan 14 14:34:01.567759 kernel: raid6: using avx512x2 recovery algorithm
Jan 14 14:34:01.590213 kernel: xor: automatically using best checksumming function avx
Jan 14 14:34:01.736215 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 14:34:01.746046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 14:34:01.753483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:34:01.770010 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Jan 14 14:34:01.776401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:34:01.789359 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 14:34:01.801301 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation
Jan 14 14:34:01.828361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 14:34:01.837334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 14:34:01.876055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 14:34:01.890345 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 14:34:01.907018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 14:34:01.913372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 14:34:01.919780 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 14:34:01.929290 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 14:34:01.939366 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 14:34:01.953205 kernel: cryptd: max_cpu_qlen set to 1000
Jan 14 14:34:01.962474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 14:34:01.983287 kernel: AVX2 version of gcm_enc/dec engaged.
Jan 14 14:34:01.987437 kernel: AES CTR mode by8 optimization enabled
Jan 14 14:34:01.996333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 14:34:01.996637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:34:02.005239 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:34:02.008099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:34:02.008824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:02.025545 kernel: hv_vmbus: Vmbus version:5.2
Jan 14 14:34:02.016417 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:02.035437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:02.052560 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 14:34:02.052598 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jan 14 14:34:02.055247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:34:02.058275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:02.068451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:02.076682 kernel: hv_vmbus: registering driver hv_storvsc
Jan 14 14:34:02.083042 kernel: hv_vmbus: registering driver hyperv_keyboard
Jan 14 14:34:02.083073 kernel: scsi host0: storvsc_host_t
Jan 14 14:34:02.083255 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Jan 14 14:34:02.083271 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Jan 14 14:34:02.090792 kernel: PTP clock support registered
Jan 14 14:34:02.094533 kernel: scsi host1: storvsc_host_t
Jan 14 14:34:02.099204 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Jan 14 14:34:02.106940 kernel: hv_utils: Registering HyperV Utility Driver
Jan 14 14:34:02.106978 kernel: hv_vmbus: registering driver hv_utils
Jan 14 14:34:02.115122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:03.026101 kernel: hv_utils: Shutdown IC version 3.2
Jan 14 14:34:03.026128 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 14 14:34:03.026146 kernel: hv_utils: Heartbeat IC version 3.0
Jan 14 14:34:03.026164 kernel: hv_utils: TimeSync IC version 4.0
Jan 14 14:34:03.023239 systemd-resolved[252]: Clock change detected. Flushing caches.
Jan 14 14:34:03.032142 kernel: hv_vmbus: registering driver hv_netvsc
Jan 14 14:34:03.039730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 14:34:03.049481 kernel: hv_vmbus: registering driver hid_hyperv
Jan 14 14:34:03.054498 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Jan 14 14:34:03.054529 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Jan 14 14:34:03.074433 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Jan 14 14:34:03.081657 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 14 14:34:03.081681 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Jan 14 14:34:03.091258 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Jan 14 14:34:03.091453 kernel: sd 0:0:0:0: [sda] Write Protect is off
Jan 14 14:34:03.091719 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Jan 14 14:34:03.091902 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Jan 14 14:34:03.092063 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Jan 14 14:34:03.092239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:34:03.092259 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Jan 14 14:34:03.085160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 14:34:03.203485 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: VF slot 1 added
Jan 14 14:34:03.213570 kernel: hv_vmbus: registering driver hv_pci
Jan 14 14:34:03.218489 kernel: hv_pci 85531272-e1f4-4dca-97e9-2f49a41529da: PCI VMBus probing: Using version 0x10004
Jan 14 14:34:03.258456 kernel: hv_pci 85531272-e1f4-4dca-97e9-2f49a41529da: PCI host bridge to bus e1f4:00
Jan 14 14:34:03.258934 kernel: pci_bus e1f4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Jan 14 14:34:03.259119 kernel: pci_bus e1f4:00: No busn resource found for root bus, will use [bus 00-ff]
Jan 14 14:34:03.259277 kernel: pci e1f4:00:02.0: [15b3:1016] type 00 class 0x020000
Jan 14 14:34:03.259488 kernel: pci e1f4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:34:03.259674 kernel: pci e1f4:00:02.0: enabling Extended Tags
Jan 14 14:34:03.259851 kernel: pci e1f4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e1f4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Jan 14 14:34:03.260040 kernel: pci_bus e1f4:00: busn_res: [bus 00-ff] end is updated to 00
Jan 14 14:34:03.260190 kernel: pci e1f4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Jan 14 14:34:03.247565 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Jan 14 14:34:03.280767 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (440)
Jan 14 14:34:03.287500 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (465)
Jan 14 14:34:03.305001 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Jan 14 14:34:03.337063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 14:34:03.351633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Jan 14 14:34:03.358795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Jan 14 14:34:03.373679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 14:34:03.470088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:34:03.476280 kernel: mlx5_core e1f4:00:02.0: enabling device (0000 -> 0002)
Jan 14 14:34:03.720889 kernel: mlx5_core e1f4:00:02.0: firmware version: 14.30.5000
Jan 14 14:34:03.721115 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: VF registering: eth1
Jan 14 14:34:03.721277 kernel: mlx5_core e1f4:00:02.0 eth1: joined to eth0
Jan 14 14:34:03.721452 kernel: mlx5_core e1f4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Jan 14 14:34:03.727484 kernel: mlx5_core e1f4:00:02.0 enP57844s1: renamed from eth1
Jan 14 14:34:04.486298 disk-uuid[594]: The operation has completed successfully.
Jan 14 14:34:04.489847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Jan 14 14:34:04.571831 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 14:34:04.571957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 14:34:04.592627 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 14 14:34:04.598086 sh[689]: Success
Jan 14 14:34:04.618584 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Jan 14 14:34:04.693784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 14:34:04.706588 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 14 14:34:04.711568 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 14 14:34:04.738783 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686
Jan 14 14:34:04.738851 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:34:04.742133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 14 14:34:04.744636 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 14:34:04.746845 kernel: BTRFS info (device dm-0): using free space tree
Jan 14 14:34:04.839508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 14 14:34:04.845105 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 14:34:04.860633 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 14:34:04.864950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 14:34:04.881476 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:34:04.881516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:34:04.885991 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:34:04.895496 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:34:04.910096 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:34:04.909674 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 14 14:34:04.921089 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 14:34:04.932686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 14:34:04.962888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 14:34:04.971879 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 14:34:04.991618 systemd-networkd[873]: lo: Link UP
Jan 14 14:34:04.991627 systemd-networkd[873]: lo: Gained carrier
Jan 14 14:34:04.993698 systemd-networkd[873]: Enumeration completed
Jan 14 14:34:04.993952 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 14:34:05.001839 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:34:05.001844 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:34:05.002293 systemd[1]: Reached target network.target - Network.
Jan 14 14:34:05.071497 kernel: mlx5_core e1f4:00:02.0 enP57844s1: Link up
Jan 14 14:34:05.106508 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: Data path switched to VF: enP57844s1
Jan 14 14:34:05.106975 systemd-networkd[873]: enP57844s1: Link UP
Jan 14 14:34:05.107098 systemd-networkd[873]: eth0: Link UP
Jan 14 14:34:05.107284 systemd-networkd[873]: eth0: Gained carrier
Jan 14 14:34:05.107298 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:34:05.111252 systemd-networkd[873]: enP57844s1: Gained carrier
Jan 14 14:34:05.147621 systemd-networkd[873]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:34:05.213868 ignition[820]: Ignition 2.19.0
Jan 14 14:34:05.213882 ignition[820]: Stage: fetch-offline
Jan 14 14:34:05.215732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 14:34:05.213930 ignition[820]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:34:05.213940 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:34:05.214064 ignition[820]: parsed url from cmdline: ""
Jan 14 14:34:05.214069 ignition[820]: no config URL provided
Jan 14 14:34:05.214077 ignition[820]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:34:05.227542 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 14:34:05.214087 ignition[820]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:34:05.214094 ignition[820]: failed to fetch config: resource requires networking
Jan 14 14:34:05.214319 ignition[820]: Ignition finished successfully
Jan 14 14:34:05.260591 ignition[882]: Ignition 2.19.0
Jan 14 14:34:05.260603 ignition[882]: Stage: fetch
Jan 14 14:34:05.260825 ignition[882]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:34:05.260842 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:34:05.260969 ignition[882]: parsed url from cmdline: ""
Jan 14 14:34:05.260974 ignition[882]: no config URL provided
Jan 14 14:34:05.260981 ignition[882]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 14:34:05.260989 ignition[882]: no config at "/usr/lib/ignition/user.ign"
Jan 14 14:34:05.261009 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Jan 14 14:34:05.357813 ignition[882]: GET result: OK
Jan 14 14:34:05.357934 ignition[882]: config has been read from IMDS userdata
Jan 14 14:34:05.357965 ignition[882]: parsing config with SHA512: abef57d44710a2f2fee4673ea91c306ffc1a3965e10143e11f062574615b7b1dd7da352d4a0f6c2a9d14ed6cd6f004e2fd6521a77536f26f0433e7c19e0c16ed
Jan 14 14:34:05.363633 unknown[882]: fetched base config from "system"
Jan 14 14:34:05.363648 unknown[882]: fetched base config from "system"
Jan 14 14:34:05.364161 ignition[882]: fetch: fetch complete
Jan 14 14:34:05.363661 unknown[882]: fetched user config from "azure"
Jan 14 14:34:05.364167 ignition[882]: fetch: fetch passed
Jan 14 14:34:05.365937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 14:34:05.364217 ignition[882]: Ignition finished successfully
Jan 14 14:34:05.378645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 14:34:05.394938 ignition[888]: Ignition 2.19.0
Jan 14 14:34:05.394948 ignition[888]: Stage: kargs
Jan 14 14:34:05.395191 ignition[888]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:34:05.398129 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 14:34:05.395205 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:34:05.396074 ignition[888]: kargs: kargs passed
Jan 14 14:34:05.396120 ignition[888]: Ignition finished successfully
Jan 14 14:34:05.413653 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 14:34:05.430353 ignition[895]: Ignition 2.19.0
Jan 14 14:34:05.430364 ignition[895]: Stage: disks
Jan 14 14:34:05.430622 ignition[895]: no configs at "/usr/lib/ignition/base.d"
Jan 14 14:34:05.430638 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Jan 14 14:34:05.431481 ignition[895]: disks: disks passed
Jan 14 14:34:05.431532 ignition[895]: Ignition finished successfully
Jan 14 14:34:05.441755 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 14:34:05.444173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 14 14:34:05.448967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 14 14:34:05.451905 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 14:34:05.456616 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 14:34:05.461335 systemd[1]: Reached target basic.target - Basic System.
Jan 14 14:34:05.476617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 14 14:34:05.510420 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Jan 14 14:34:05.514562 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 14 14:34:05.527553 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 14 14:34:05.616826 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none.
Jan 14 14:34:05.617481 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 14 14:34:05.621652 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 14 14:34:05.642569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 14 14:34:05.646390 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 14 14:34:05.654633 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Jan 14 14:34:05.669754 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914)
Jan 14 14:34:05.669791 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:34:05.669813 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Jan 14 14:34:05.669833 kernel: BTRFS info (device sda6): using free space tree
Jan 14 14:34:05.660297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 14 14:34:05.660329 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 14 14:34:05.681823 kernel: BTRFS info (device sda6): auto enabling async discard
Jan 14 14:34:05.684624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 14 14:34:05.691014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 14 14:34:05.695690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 14 14:34:05.862144 coreos-metadata[916]: Jan 14 14:34:05.862 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 14:34:05.866000 coreos-metadata[916]: Jan 14 14:34:05.865 INFO Fetch successful
Jan 14 14:34:05.866000 coreos-metadata[916]: Jan 14 14:34:05.865 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Jan 14 14:34:05.880430 coreos-metadata[916]: Jan 14 14:34:05.880 INFO Fetch successful
Jan 14 14:34:05.883903 coreos-metadata[916]: Jan 14 14:34:05.883 INFO wrote hostname ci-4081.3.0-a-0bb245c6fa to /sysroot/etc/hostname
Jan 14 14:34:05.886176 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Jan 14 14:34:05.900049 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory
Jan 14 14:34:05.910613 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory
Jan 14 14:34:05.918554 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory
Jan 14 14:34:05.923799 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 14 14:34:06.188273 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 14 14:34:06.197685 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 14 14:34:06.205691 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 14 14:34:06.214488 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e
Jan 14 14:34:06.208991 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 14 14:34:06.238158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 14:34:06.247436 ignition[1035]: INFO : Ignition 2.19.0 Jan 14 14:34:06.247436 ignition[1035]: INFO : Stage: mount Jan 14 14:34:06.251311 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:06.251311 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:06.251311 ignition[1035]: INFO : mount: mount passed Jan 14 14:34:06.251311 ignition[1035]: INFO : Ignition finished successfully Jan 14 14:34:06.261952 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 14:34:06.270579 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 14:34:06.279223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 14:34:06.295484 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046) Jan 14 14:34:06.299478 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:06.299519 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 14:34:06.303510 kernel: BTRFS info (device sda6): using free space tree Jan 14 14:34:06.308778 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 14:34:06.310236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 14:34:06.330451 ignition[1063]: INFO : Ignition 2.19.0 Jan 14 14:34:06.330451 ignition[1063]: INFO : Stage: files Jan 14 14:34:06.334174 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:06.334174 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:06.334174 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Jan 14 14:34:06.341767 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 14:34:06.341767 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 14:34:06.363769 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 14:34:06.367364 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 14:34:06.370895 unknown[1063]: wrote ssh authorized keys file for user: core Jan 14 14:34:06.373372 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 14:34:06.376849 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 14:34:06.381276 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 14:34:06.465712 systemd-networkd[873]: enP57844s1: Gained IPv6LL Jan 14 14:34:06.466084 systemd-networkd[873]: eth0: Gained IPv6LL Jan 14 14:34:06.718180 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 14:34:07.135617 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 14:34:07.135617 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 14:34:07.144796 ignition[1063]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 14:34:07.160767 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 14:34:07.164538 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 14:34:07.168447 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 14:34:07.172760 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 14:34:07.176758 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 14:34:07.612140 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 14:34:07.970065 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.970065 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: files passed Jan 14 14:34:07.981372 ignition[1063]: INFO : 
Ignition finished successfully Jan 14 14:34:07.976423 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 14:34:08.018795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 14:34:08.028179 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 14:34:08.031042 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 14:34:08.031137 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 14:34:08.054309 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.054309 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.058023 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.057482 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 14:34:08.071450 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 14:34:08.083653 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 14:34:08.108171 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 14:34:08.108294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 14:34:08.114024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 14:34:08.120995 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 14:34:08.123362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 14:34:08.129678 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 14:34:08.141324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:34:08.150639 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 14:34:08.163247 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 14:34:08.163458 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 14:34:08.163935 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 14:34:08.164309 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 14:34:08.164418 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:34:08.165397 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 14:34:08.165857 systemd[1]: Stopped target basic.target - Basic System. Jan 14 14:34:08.166233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 14:34:08.166594 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 14:34:08.166941 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 14:34:08.167295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 14:34:08.167654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 14:34:08.168017 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 14:34:08.168362 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 14:34:08.169063 systemd[1]: Stopped target swap.target - Swaps. 
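The grep errors from initrd-setup-root-after-ignition above are harmless: the completion hook probes two optional enabled-sysext.conf locations, and a missing file simply means no extra extensions are enabled. The more interesting pair in the files stage is ops (9) and (a): the kubernetes sysext image is written under /opt/extensions and exposed through a symlink in /etc/extensions, which is where systemd-sysext discovers extension images on the booted system. A sketch of the link step, with both paths copied from the log; the link target deliberately omits the /sysroot prefix because it is resolved after the switch to the real root:

    # Sketch of op(9): expose the kubernetes sysext image via /etc/extensions.
    import os

    target = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
    link = "/sysroot/etc/extensions/kubernetes.raw"

    os.makedirs(os.path.dirname(link), exist_ok=True)
    if not os.path.lexists(link):   # tolerate re-runs
        os.symlink(target, link)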
Jan 14 14:34:08.169409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 14:34:08.169555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 14:34:08.170145 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 14:34:08.170537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 14:34:08.170844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 14:34:08.204054 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 14:34:08.206841 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 14:34:08.207009 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 14:34:08.216563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 14:34:08.216693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 14:34:08.219127 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 14:34:08.219268 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 14:34:08.219474 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 14:34:08.219606 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 14:34:08.243215 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 14:34:08.246629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 14:34:08.248991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 14:34:08.249161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 14:34:08.252175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 14:34:08.252324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 14:34:08.258531 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 14:34:08.258643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 14:34:08.276539 ignition[1116]: INFO : Ignition 2.19.0 Jan 14 14:34:08.276539 ignition[1116]: INFO : Stage: umount Jan 14 14:34:08.276539 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:08.276539 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:08.281738 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 14:34:08.282213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 14:34:08.285176 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 14:34:08.285292 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 14:34:08.289908 ignition[1116]: INFO : umount: umount passed Jan 14 14:34:08.289908 ignition[1116]: INFO : Ignition finished successfully Jan 14 14:34:08.293856 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 14:34:08.293914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 14:34:08.303569 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 14:34:08.303624 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 14:34:08.310597 systemd[1]: Stopped target network.target - Network. Jan 14 14:34:08.314926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 14:34:08.315001 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
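The long run of "Deactivated successfully" / "Stopped ..." pairs above is the initrd teardown running in reverse dependency order; note the Ignition umount stage (ignition[1116]) emits its output while ignition-mount.service is being stopped. When auditing this ordering, the stop sequence can be recovered from the journal text itself; a sketch with a regex tailored to this log's exact line format (it matches unit names after "Stopped" and skips "Stopped target ..." lines):

    # Sketch: pull the unit stop order out of journal lines like those above.
    import re

    STOPPED = re.compile(r"systemd\[1\]: Stopped ([\w@-]+\.[a-z]+)")

    def stop_order(lines):
        order = []
        for ln in lines:
            m = STOPPED.search(ln)
            if m and m.group(1) not in order:
                order.append(m.group(1))
        return order

    sample = [
        "Jan 14 14:34:08.207009 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.",
        "Jan 14 14:34:08.282213 systemd[1]: Stopped ignition-mount.service - Ignition (mount).",
    ]
    print(stop_order(sample))  # ['dracut-initqueue.service', 'ignition-mount.service']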
Jan 14 14:34:08.365777 systemd[1]: Stopped target paths.target - Path Units. Jan 14 14:34:08.365881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 14:34:08.372202 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 14:34:08.372296 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 14:34:08.373037 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 14:34:08.373454 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 14:34:08.373510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 14:34:08.373819 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 14:34:08.373855 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 14:34:08.374145 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 14:34:08.374189 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 14:34:08.374519 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 14:34:08.374553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 14:34:08.375007 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 14:34:08.375273 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 14:34:08.377142 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 14:34:08.403626 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 14:34:08.403748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 14:34:08.407530 systemd-networkd[873]: eth0: DHCPv6 lease lost Jan 14 14:34:08.410063 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 14:34:08.410178 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 14:34:08.414361 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 14:34:08.414431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 14:34:08.440286 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 14:34:08.448576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 14:34:08.448670 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 14:34:08.456306 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 14:34:08.456376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:34:08.463006 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 14:34:08.463071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 14:34:08.467706 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 14:34:08.467759 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 14:34:08.477359 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 14:34:08.500225 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 14:34:08.500393 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 14:34:08.506137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 14:34:08.506184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 14:34:08.511200 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
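parse-ip-for-networkd.service, stopped above, is the helper that (per its unit description) writes systemd-networkd units from the kernel command line; when no ip= argument is present, Flatcar's default networkd configuration falls back to DHCP, which matches the DHCPv6 lease seen being lost as networkd stops. A simplified sketch of the cmdline-scanning side of that idea, not the real generator:

    # Sketch: scan /proc/cmdline for ip= settings the way a cmdline-driven
    # network generator would; absence means "let networkd DHCP the NICs".
    def cmdline_options(text):
        opts = {}
        for tok in text.split():
            key, _, val = tok.partition("=")
            opts.setdefault(key, []).append(val)
        return opts

    with open("/proc/cmdline") as f:
        opts = cmdline_options(f.read())

    for spec in opts.get("ip", []):
        print("network spec from cmdline:", spec)
    if "ip" not in opts:
        print("no ip= on cmdline; defaulting to DHCP")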
Jan 14 14:34:08.511243 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 14:34:08.516062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 14:34:08.516112 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 14:34:08.516646 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 14:34:08.516690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 14:34:08.540116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 14:34:08.540194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:34:08.545057 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: Data path switched from VF: enP57844s1 Jan 14 14:34:08.550705 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 14:34:08.553196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 14:34:08.553266 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 14:34:08.556262 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 14:34:08.556329 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 14:34:08.561825 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 14:34:08.564129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 14:34:08.567005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 14:34:08.567058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:08.570154 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 14:34:08.570246 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 14:34:08.575124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 14:34:08.575206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 14:34:08.835718 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 14:34:08.835880 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 14:34:08.840714 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 14:34:08.844685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 14:34:08.844754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 14:34:08.860662 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 14:34:09.194210 systemd[1]: Switching root.
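The final "Switching root." hands /sysroot over as the new /. A sketch of the core mechanism (an MS_MOVE mount plus chroot) that systemd's switch-root performs; the real implementation also migrates /dev, /proc, /sys and /run into the new root and deletes the old initramfs contents, so treat this as illustration only:

    # Sketch: minimal switch-root core (move /sysroot onto / and chroot).
    # Destructive by nature; only meaningful inside a throwaway initramfs.
    import ctypes
    import os

    libc = ctypes.CDLL(None, use_errno=True)
    MS_MOVE = 0x2000  # from <sys/mount.h>

    def switch_root(new_root="/sysroot"):
        os.chdir(new_root)
        ret = libc.mount(new_root.encode(), b"/", None,
                         ctypes.c_ulong(MS_MOVE), None)
        if ret != 0:
            raise OSError(ctypes.get_errno(), "mount --move failed")
        os.chroot(".")
        os.chdir("/")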
Jan 14 14:34:09.238312 systemd-journald[176]: Journal stopped
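"Journal stopped" marks the initrd journald shutting down just before the pivot; once the booted system's journald starts, it flushes the runtime journal from the initrd into persistent storage, so the pre-pivot messages survive under the same boot ID. A quick way to confirm that from the booted system, using documented journalctl flags:

    # Sketch: show that initrd and real-root messages share one boot ID.
    import subprocess

    out = subprocess.run(["journalctl", "--list-boots", "--no-pager"],
                         capture_output=True, text=True, check=True).stdout
    print(out)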
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jan 14 14:34:01.048827 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jan 14 14:34:01.048836 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jan 14 14:34:01.048845 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jan 14 14:34:01.048853 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jan 14 14:34:01.048863 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jan 14 14:34:01.048871 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 14 14:34:01.048881 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 14 14:34:01.048891 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jan 14 14:34:01.048902 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jan 14 14:34:01.048911 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jan 14 14:34:01.048920 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jan 14 14:34:01.048928 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jan 14 14:34:01.048939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jan 14 14:34:01.048947 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jan 14 14:34:01.048956 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jan 14 14:34:01.048965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jan 14 14:34:01.048973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jan 14 14:34:01.048985 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jan 14 14:34:01.048993 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jan 14 14:34:01.049003 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jan 14 14:34:01.049013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jan 14 14:34:01.049022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jan 14 14:34:01.049032 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jan 14 14:34:01.049045 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jan 14 14:34:01.049055 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jan 14 14:34:01.049063 kernel: Zone ranges: Jan 14 14:34:01.049074 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 14 14:34:01.049081 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jan 14 14:34:01.049089 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 14:34:01.049099 kernel: Movable zone start for each node Jan 14 14:34:01.049112 kernel: Early memory node ranges Jan 14 14:34:01.049124 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jan 14 14:34:01.049132 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jan 14 14:34:01.049143 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jan 14 14:34:01.049162 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jan 14 14:34:01.049179 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jan 14 14:34:01.049200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 14 14:34:01.049218 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jan 14 14:34:01.049231 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Jan 14 14:34:01.049239 kernel: ACPI: PM-Timer IO Port: 0x408 Jan 14 14:34:01.049251 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jan 14 14:34:01.049269 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jan 14 14:34:01.049284 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 14 14:34:01.049298 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 14 14:34:01.049310 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jan 14 14:34:01.049320 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 14 14:34:01.049334 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jan 14 14:34:01.049349 kernel: Booting paravirtualized kernel on Hyper-V Jan 14 14:34:01.049362 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 14 14:34:01.049370 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 14 14:34:01.049384 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Jan 14 14:34:01.049398 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 14 14:34:01.049405 kernel: pcpu-alloc: [0] 0 1 Jan 14 14:34:01.049422 kernel: Hyper-V: PV spinlocks enabled Jan 14 14:34:01.049438 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jan 14 14:34:01.049452 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 14 14:34:01.049460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 14 14:34:01.049467 kernel: random: crng init done Jan 14 14:34:01.049474 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 14 14:34:01.049490 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jan 14 14:34:01.049506 kernel: Fallback order for Node 0: 0 Jan 14 14:34:01.049523 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jan 14 14:34:01.049540 kernel: Policy zone: Normal Jan 14 14:34:01.049559 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 14 14:34:01.049580 kernel: software IO TLB: area num 2. Jan 14 14:34:01.049591 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22728K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved) Jan 14 14:34:01.049599 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 14 14:34:01.049615 kernel: ftrace: allocating 37918 entries in 149 pages Jan 14 14:34:01.049630 kernel: ftrace: allocated 149 pages with 4 groups Jan 14 14:34:01.049641 kernel: Dynamic Preempt: voluntary Jan 14 14:34:01.049649 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 14 14:34:01.049671 kernel: rcu: RCU event tracing is enabled. Jan 14 14:34:01.049686 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 14 14:34:01.049695 kernel: Trampoline variant of Tasks RCU enabled. Jan 14 14:34:01.049711 kernel: Rude variant of Tasks RCU enabled. Jan 14 14:34:01.049727 kernel: Tracing variant of Tasks RCU enabled. 
Jan 14 14:34:01.049741 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 14 14:34:01.049751 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 14 14:34:01.049768 kernel: Using NULL legacy PIC Jan 14 14:34:01.049786 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jan 14 14:34:01.049796 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 14 14:34:01.049804 kernel: Console: colour dummy device 80x25 Jan 14 14:34:01.049821 kernel: printk: console [tty1] enabled Jan 14 14:34:01.049833 kernel: printk: console [ttyS0] enabled Jan 14 14:34:01.049841 kernel: printk: bootconsole [earlyser0] disabled Jan 14 14:34:01.049858 kernel: ACPI: Core revision 20230628 Jan 14 14:34:01.049869 kernel: Failed to register legacy timer interrupt Jan 14 14:34:01.049881 kernel: APIC: Switch to symmetric I/O mode setup Jan 14 14:34:01.049899 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jan 14 14:34:01.049913 kernel: Hyper-V: Using IPI hypercalls Jan 14 14:34:01.049921 kernel: APIC: send_IPI() replaced with hv_send_ipi() Jan 14 14:34:01.049929 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Jan 14 14:34:01.049946 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Jan 14 14:34:01.049960 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Jan 14 14:34:01.049969 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Jan 14 14:34:01.049977 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Jan 14 14:34:01.050001 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Jan 14 14:34:01.050014 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 14 14:34:01.050022 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 14 14:34:01.050035 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 14 14:34:01.050050 kernel: Spectre V2 : Mitigation: Retpolines Jan 14 14:34:01.050061 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 14 14:34:01.050069 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 14 14:34:01.050077 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Jan 14 14:34:01.050085 kernel: RETBleed: Vulnerable Jan 14 14:34:01.050095 kernel: Speculative Store Bypass: Vulnerable Jan 14 14:34:01.050103 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 14:34:01.050111 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 14 14:34:01.050119 kernel: GDS: Unknown: Dependent on hypervisor status Jan 14 14:34:01.050126 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 14 14:34:01.050134 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 14 14:34:01.050142 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 14 14:34:01.050150 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jan 14 14:34:01.050158 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jan 14 14:34:01.050166 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jan 14 14:34:01.050174 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 14 14:34:01.050192 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jan 14 14:34:01.050210 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jan 14 14:34:01.050225 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jan 14 14:34:01.050234 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jan 14 14:34:01.050241 kernel: Freeing SMP alternatives memory: 32K Jan 14 14:34:01.050255 kernel: pid_max: default: 32768 minimum: 301 Jan 14 14:34:01.050273 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 14 14:34:01.050290 kernel: landlock: Up and running. Jan 14 14:34:01.050299 kernel: SELinux: Initializing. Jan 14 14:34:01.050308 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 14:34:01.050323 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 14 14:34:01.050338 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jan 14 14:34:01.050349 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:34:01.050362 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:34:01.050378 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 14 14:34:01.050388 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jan 14 14:34:01.050396 kernel: signal: max sigframe size: 3632 Jan 14 14:34:01.050416 kernel: rcu: Hierarchical SRCU implementation. Jan 14 14:34:01.050432 kernel: rcu: Max phase no-delay instances is 400. Jan 14 14:34:01.050442 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 14 14:34:01.050452 kernel: smp: Bringing up secondary CPUs ... Jan 14 14:34:01.050472 kernel: smpboot: x86: Booting SMP configuration: Jan 14 14:34:01.050484 kernel: .... node #0, CPUs: #1 Jan 14 14:34:01.050498 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jan 14 14:34:01.050511 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Jan 14 14:34:01.050535 kernel: smp: Brought up 1 node, 2 CPUs Jan 14 14:34:01.050551 kernel: smpboot: Max logical packages: 1 Jan 14 14:34:01.050566 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jan 14 14:34:01.050583 kernel: devtmpfs: initialized Jan 14 14:34:01.050603 kernel: x86/mm: Memory block size: 128MB Jan 14 14:34:01.050620 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jan 14 14:34:01.050635 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 14 14:34:01.050651 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 14 14:34:01.050667 kernel: pinctrl core: initialized pinctrl subsystem Jan 14 14:34:01.050684 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 14 14:34:01.050700 kernel: audit: initializing netlink subsys (disabled) Jan 14 14:34:01.050716 kernel: audit: type=2000 audit(1736865240.027:1): state=initialized audit_enabled=0 res=1 Jan 14 14:34:01.050732 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 14 14:34:01.050752 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 14 14:34:01.050768 kernel: cpuidle: using governor menu Jan 14 14:34:01.050784 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 14 14:34:01.050800 kernel: dca service started, version 1.12.1 Jan 14 14:34:01.050815 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jan 14 14:34:01.050832 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jan 14 14:34:01.050849 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 14 14:34:01.050864 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 14 14:34:01.050879 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 14 14:34:01.050900 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 14 14:34:01.050916 kernel: ACPI: Added _OSI(Module Device) Jan 14 14:34:01.050933 kernel: ACPI: Added _OSI(Processor Device) Jan 14 14:34:01.050950 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 14 14:34:01.050965 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 14 14:34:01.050982 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 14 14:34:01.050998 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 14 14:34:01.051013 kernel: ACPI: Interpreter enabled Jan 14 14:34:01.051029 kernel: ACPI: PM: (supports S0 S5) Jan 14 14:34:01.051050 kernel: ACPI: Using IOAPIC for interrupt routing Jan 14 14:34:01.051066 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 14 14:34:01.051082 kernel: PCI: Ignoring E820 reservations for host bridge windows Jan 14 14:34:01.051098 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jan 14 14:34:01.051114 kernel: iommu: Default domain type: Translated Jan 14 14:34:01.051129 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 14 14:34:01.051146 kernel: efivars: Registered efivars operations Jan 14 14:34:01.051162 kernel: PCI: Using ACPI for IRQ routing Jan 14 14:34:01.051177 kernel: PCI: System does not support PCI Jan 14 14:34:01.051209 kernel: vgaarb: loaded Jan 14 14:34:01.051227 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jan 14 14:34:01.051242 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 14:34:01.051258 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 14:34:01.051273 kernel: 
pnp: PnP ACPI init Jan 14 14:34:01.051289 kernel: pnp: PnP ACPI: found 3 devices Jan 14 14:34:01.051307 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 14 14:34:01.051322 kernel: NET: Registered PF_INET protocol family Jan 14 14:34:01.051338 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 14:34:01.051359 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jan 14 14:34:01.051375 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 14:34:01.051392 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 14 14:34:01.051406 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 14:34:01.051420 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jan 14 14:34:01.051435 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 14:34:01.051450 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jan 14 14:34:01.051466 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 14:34:01.051480 kernel: NET: Registered PF_XDP protocol family Jan 14 14:34:01.051499 kernel: PCI: CLS 0 bytes, default 64 Jan 14 14:34:01.051516 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jan 14 14:34:01.051532 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Jan 14 14:34:01.051547 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 14 14:34:01.051559 kernel: Initialise system trusted keyrings Jan 14 14:34:01.051568 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jan 14 14:34:01.051578 kernel: Key type asymmetric registered Jan 14 14:34:01.051590 kernel: Asymmetric key parser 'x509' registered Jan 14 14:34:01.051604 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 14 14:34:01.051621 kernel: io scheduler mq-deadline registered Jan 14 14:34:01.051636 kernel: io scheduler kyber registered Jan 14 14:34:01.051650 kernel: io scheduler bfq registered Jan 14 14:34:01.051665 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 14 14:34:01.051678 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 14:34:01.051693 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 14 14:34:01.051707 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jan 14 14:34:01.051720 kernel: i8042: PNP: No PS/2 controller found. 
Jan 14 14:34:01.051911 kernel: rtc_cmos 00:02: registered as rtc0 Jan 14 14:34:01.052044 kernel: rtc_cmos 00:02: setting system clock to 2025-01-14T14:34:00 UTC (1736865240) Jan 14 14:34:01.052160 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jan 14 14:34:01.052176 kernel: intel_pstate: CPU model not supported Jan 14 14:34:01.052215 kernel: efifb: probing for efifb Jan 14 14:34:01.052228 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jan 14 14:34:01.052241 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jan 14 14:34:01.052256 kernel: efifb: scrolling: redraw Jan 14 14:34:01.052275 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jan 14 14:34:01.052290 kernel: Console: switching to colour frame buffer device 128x48 Jan 14 14:34:01.052304 kernel: fb0: EFI VGA frame buffer device Jan 14 14:34:01.052318 kernel: pstore: Using crash dump compression: deflate Jan 14 14:34:01.052334 kernel: pstore: Registered efi_pstore as persistent store backend Jan 14 14:34:01.052348 kernel: NET: Registered PF_INET6 protocol family Jan 14 14:34:01.052363 kernel: Segment Routing with IPv6 Jan 14 14:34:01.052377 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 14:34:01.052391 kernel: NET: Registered PF_PACKET protocol family Jan 14 14:34:01.052406 kernel: Key type dns_resolver registered Jan 14 14:34:01.052422 kernel: IPI shorthand broadcast: enabled Jan 14 14:34:01.052436 kernel: sched_clock: Marking stable (756008600, 39437000)->(971039900, -175594300) Jan 14 14:34:01.052450 kernel: registered taskstats version 1 Jan 14 14:34:01.052464 kernel: Loading compiled-in X.509 certificates Jan 14 14:34:01.052479 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: e8ca4908f7ff887d90a0430272c92dde55624447' Jan 14 14:34:01.052493 kernel: Key type .fscrypt registered Jan 14 14:34:01.052506 kernel: Key type fscrypt-provisioning registered Jan 14 14:34:01.052521 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 14 14:34:01.052538 kernel: ima: Allocated hash algorithm: sha1 Jan 14 14:34:01.052552 kernel: ima: No architecture policies found Jan 14 14:34:01.052566 kernel: clk: Disabling unused clocks Jan 14 14:34:01.052579 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 14 14:34:01.052594 kernel: Write protecting the kernel read-only data: 36864k Jan 14 14:34:01.052608 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 14 14:34:01.052622 kernel: Run /init as init process Jan 14 14:34:01.052637 kernel: with arguments: Jan 14 14:34:01.052651 kernel: /init Jan 14 14:34:01.052664 kernel: with environment: Jan 14 14:34:01.052681 kernel: HOME=/ Jan 14 14:34:01.052694 kernel: TERM=linux Jan 14 14:34:01.052708 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 14 14:34:01.052725 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 14:34:01.052742 systemd[1]: Detected virtualization microsoft. Jan 14 14:34:01.052757 systemd[1]: Detected architecture x86-64. Jan 14 14:34:01.052771 systemd[1]: Running in initrd. Jan 14 14:34:01.052788 systemd[1]: No hostname configured, using default hostname. Jan 14 14:34:01.052802 systemd[1]: Hostname set to . Jan 14 14:34:01.052817 systemd[1]: Initializing machine ID from random generator. 
Jan 14 14:34:01.052832 systemd[1]: Queued start job for default target initrd.target. Jan 14 14:34:01.052847 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 14:34:01.052862 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 14:34:01.052878 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 14 14:34:01.052893 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 14:34:01.052909 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 14 14:34:01.052924 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 14 14:34:01.052942 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 14 14:34:01.052958 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 14 14:34:01.052975 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 14:34:01.052994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 14 14:34:01.053007 systemd[1]: Reached target paths.target - Path Units. Jan 14 14:34:01.053023 systemd[1]: Reached target slices.target - Slice Units. Jan 14 14:34:01.053038 systemd[1]: Reached target swap.target - Swaps. Jan 14 14:34:01.053051 systemd[1]: Reached target timers.target - Timer Units. Jan 14 14:34:01.053064 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 14:34:01.053078 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 14:34:01.053092 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 14 14:34:01.053105 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 14 14:34:01.053118 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 14:34:01.053136 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 14:34:01.053150 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 14:34:01.053164 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 14:34:01.053179 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 14 14:34:01.053204 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 14:34:01.053218 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 14 14:34:01.053230 systemd[1]: Starting systemd-fsck-usr.service... Jan 14 14:34:01.053239 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 14:34:01.056229 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 14:34:01.056258 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 14:34:01.056275 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 14 14:34:01.056319 systemd-journald[176]: Collecting audit messages is disabled. Jan 14 14:34:01.056355 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 14:34:01.056374 systemd-journald[176]: Journal started Jan 14 14:34:01.056421 systemd-journald[176]: Runtime Journal (/run/log/journal/75e83cb9b3f944b2a84b35866f111009) is 8.0M, max 158.8M, 150.8M free. 
Jan 14 14:34:01.062083 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 14:34:01.061947 systemd-modules-load[177]: Inserted module 'overlay' Jan 14 14:34:01.064660 systemd[1]: Finished systemd-fsck-usr.service. Jan 14 14:34:01.068521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:01.083473 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 14:34:01.090632 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 14:34:01.099390 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 14:34:01.116209 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 14 14:34:01.121204 kernel: Bridge firewalling registered Jan 14 14:34:01.121388 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:34:01.129394 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 14 14:34:01.134313 systemd-modules-load[177]: Inserted module 'br_netfilter' Jan 14 14:34:01.135989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 14:34:01.141657 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 14:34:01.149460 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 14:34:01.162494 dracut-cmdline[201]: dracut-dracut-053 Jan 14 14:34:01.166076 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507 Jan 14 14:34:01.166044 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 14:34:01.171344 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 14:34:01.200404 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:34:01.214640 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 14 14:34:01.218241 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 14:34:01.266041 systemd-resolved[252]: Positive Trust Anchors: Jan 14 14:34:01.266066 systemd-resolved[252]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 14 14:34:01.275381 kernel: SCSI subsystem initialized Jan 14 14:34:01.266119 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 14:34:01.291734 systemd-resolved[252]: Defaulting to hostname 'linux'. Jan 14 14:34:01.295152 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 14:34:01.304261 kernel: Loading iSCSI transport class v2.0-870. Jan 14 14:34:01.301527 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 14:34:01.314204 kernel: iscsi: registered transport (tcp) Jan 14 14:34:01.335408 kernel: iscsi: registered transport (qla4xxx) Jan 14 14:34:01.335485 kernel: QLogic iSCSI HBA Driver Jan 14 14:34:01.371336 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 14 14:34:01.381332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 14 14:34:01.407487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 14 14:34:01.407576 kernel: device-mapper: uevent: version 1.0.3 Jan 14 14:34:01.411206 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 14 14:34:01.451217 kernel: raid6: avx512x4 gen() 18996 MB/s Jan 14 14:34:01.470202 kernel: raid6: avx512x2 gen() 18755 MB/s Jan 14 14:34:01.489195 kernel: raid6: avx512x1 gen() 17972 MB/s Jan 14 14:34:01.508201 kernel: raid6: avx2x4 gen() 18698 MB/s Jan 14 14:34:01.527200 kernel: raid6: avx2x2 gen() 18676 MB/s Jan 14 14:34:01.547211 kernel: raid6: avx2x1 gen() 14074 MB/s Jan 14 14:34:01.547243 kernel: raid6: using algorithm avx512x4 gen() 18996 MB/s Jan 14 14:34:01.567719 kernel: raid6: .... xor() 7263 MB/s, rmw enabled Jan 14 14:34:01.567759 kernel: raid6: using avx512x2 recovery algorithm Jan 14 14:34:01.590213 kernel: xor: automatically using best checksumming function avx Jan 14 14:34:01.736215 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 14 14:34:01.746046 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 14 14:34:01.753483 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 14:34:01.770010 systemd-udevd[396]: Using default interface naming scheme 'v255'. Jan 14 14:34:01.776401 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 14:34:01.789359 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 14 14:34:01.801301 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Jan 14 14:34:01.828361 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 14:34:01.837334 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 14:34:01.876055 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 14:34:01.890345 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jan 14 14:34:01.907018 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 14 14:34:01.913372 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 14:34:01.919780 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 14:34:01.929290 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 14:34:01.939366 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 14 14:34:01.953205 kernel: cryptd: max_cpu_qlen set to 1000 Jan 14 14:34:01.962474 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 14 14:34:01.983287 kernel: AVX2 version of gcm_enc/dec engaged. Jan 14 14:34:01.987437 kernel: AES CTR mode by8 optimization enabled Jan 14 14:34:01.996333 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 14:34:01.996637 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:34:02.005239 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 14 14:34:02.008099 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 14:34:02.008824 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:02.025545 kernel: hv_vmbus: Vmbus version:5.2 Jan 14 14:34:02.016417 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 14:34:02.035437 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 14:34:02.052560 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 14 14:34:02.052598 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 14:34:02.055247 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 14:34:02.058275 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:02.068451 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 14:34:02.076682 kernel: hv_vmbus: registering driver hv_storvsc Jan 14 14:34:02.083042 kernel: hv_vmbus: registering driver hyperv_keyboard Jan 14 14:34:02.083073 kernel: scsi host0: storvsc_host_t Jan 14 14:34:02.083255 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jan 14 14:34:02.083271 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jan 14 14:34:02.090792 kernel: PTP clock support registered Jan 14 14:34:02.094533 kernel: scsi host1: storvsc_host_t Jan 14 14:34:02.099204 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jan 14 14:34:02.106940 kernel: hv_utils: Registering HyperV Utility Driver Jan 14 14:34:02.106978 kernel: hv_vmbus: registering driver hv_utils Jan 14 14:34:02.115122 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:03.026101 kernel: hv_utils: Shutdown IC version 3.2 Jan 14 14:34:03.026128 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 14 14:34:03.026146 kernel: hv_utils: Heartbeat IC version 3.0 Jan 14 14:34:03.026164 kernel: hv_utils: TimeSync IC version 4.0 Jan 14 14:34:03.023239 systemd-resolved[252]: Clock change detected. Flushing caches. Jan 14 14:34:03.032142 kernel: hv_vmbus: registering driver hv_netvsc Jan 14 14:34:03.039730 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
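The jump in kernel timestamps above (from 14:34:02.1 straight to 14:34:03.0) is the Hyper-V TimeSync IC stepping the guest clock, which is why systemd-resolved logs "Clock change detected. Flushing caches." A small sketch measuring the apparent step from the two console timestamps (the year is an assumption, since the log prefix omits it):

```python
from datetime import datetime

def parse_ts(stamp: str) -> datetime:
    # Console timestamps look like "Jan 14 14:34:02.115122"; year assumed.
    return datetime.strptime(f"2025 {stamp}", "%Y %b %d %H:%M:%S.%f")

before = parse_ts("Jan 14 14:34:02.115122")  # last record before the step
after = parse_ts("Jan 14 14:34:03.026101")   # first record after TimeSync
print(f"apparent clock step: {(after - before).total_seconds():.6f} s")
```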
Jan 14 14:34:03.049481 kernel: hv_vmbus: registering driver hid_hyperv Jan 14 14:34:03.054498 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jan 14 14:34:03.054529 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jan 14 14:34:03.074433 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jan 14 14:34:03.081657 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 14 14:34:03.081681 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jan 14 14:34:03.091258 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jan 14 14:34:03.091453 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 14 14:34:03.091719 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jan 14 14:34:03.091902 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jan 14 14:34:03.092063 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jan 14 14:34:03.092239 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 14:34:03.092259 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 14 14:34:03.085160 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:34:03.203485 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: VF slot 1 added Jan 14 14:34:03.213570 kernel: hv_vmbus: registering driver hv_pci Jan 14 14:34:03.218489 kernel: hv_pci 85531272-e1f4-4dca-97e9-2f49a41529da: PCI VMBus probing: Using version 0x10004 Jan 14 14:34:03.258456 kernel: hv_pci 85531272-e1f4-4dca-97e9-2f49a41529da: PCI host bridge to bus e1f4:00 Jan 14 14:34:03.258934 kernel: pci_bus e1f4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jan 14 14:34:03.259119 kernel: pci_bus e1f4:00: No busn resource found for root bus, will use [bus 00-ff] Jan 14 14:34:03.259277 kernel: pci e1f4:00:02.0: [15b3:1016] type 00 class 0x020000 Jan 14 14:34:03.259488 kernel: pci e1f4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 14:34:03.259674 kernel: pci e1f4:00:02.0: enabling Extended Tags Jan 14 14:34:03.259851 kernel: pci e1f4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e1f4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jan 14 14:34:03.260040 kernel: pci_bus e1f4:00: busn_res: [bus 00-ff] end is updated to 00 Jan 14 14:34:03.260190 kernel: pci e1f4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jan 14 14:34:03.247565 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Jan 14 14:34:03.280767 kernel: BTRFS: device fsid b8e2d3c5-4bed-4339-bed5-268c66823686 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (440) Jan 14 14:34:03.287500 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (465) Jan 14 14:34:03.305001 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jan 14 14:34:03.337063 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jan 14 14:34:03.351633 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jan 14 14:34:03.358795 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jan 14 14:34:03.373679 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
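The storvsc probe above reports the disk geometry two ways: 63737856 logical blocks of 512 bytes, backed by 4096-byte physical blocks. The kernel's "(32.6 GB/30.4 GiB)" annotation is simply the same byte count under the two unit conventions:

```python
blocks, logical_bytes = 63_737_856, 512   # from the sd 0:0:0:0 probe above
size = blocks * logical_bytes             # 32_633_782_272 bytes
print(f"{size / 10**9:.1f} GB / {size / 2**30:.1f} GiB")  # -> 32.6 GB / 30.4 GiB
```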
Jan 14 14:34:03.470088 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 14:34:03.476280 kernel: mlx5_core e1f4:00:02.0: enabling device (0000 -> 0002) Jan 14 14:34:03.720889 kernel: mlx5_core e1f4:00:02.0: firmware version: 14.30.5000 Jan 14 14:34:03.721115 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: VF registering: eth1 Jan 14 14:34:03.721277 kernel: mlx5_core e1f4:00:02.0 eth1: joined to eth0 Jan 14 14:34:03.721452 kernel: mlx5_core e1f4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Jan 14 14:34:03.727484 kernel: mlx5_core e1f4:00:02.0 enP57844s1: renamed from eth1 Jan 14 14:34:04.486298 disk-uuid[594]: The operation has completed successfully. Jan 14 14:34:04.489847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 14 14:34:04.571831 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 14 14:34:04.571957 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 14 14:34:04.592627 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 14 14:34:04.598086 sh[689]: Success Jan 14 14:34:04.618584 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 14 14:34:04.693784 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 14 14:34:04.706588 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jan 14 14:34:04.711568 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 14 14:34:04.738783 kernel: BTRFS info (device dm-0): first mount of filesystem b8e2d3c5-4bed-4339-bed5-268c66823686 Jan 14 14:34:04.738851 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 14 14:34:04.742133 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 14 14:34:04.744636 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 14 14:34:04.746845 kernel: BTRFS info (device dm-0): using free space tree Jan 14 14:34:04.839508 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 14 14:34:04.845105 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 14 14:34:04.860633 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 14 14:34:04.864950 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 14 14:34:04.881476 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:04.881516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 14:34:04.885991 kernel: BTRFS info (device sda6): using free space tree Jan 14 14:34:04.895496 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 14:34:04.910096 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:04.909674 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 14 14:34:04.921089 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 14 14:34:04.932686 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 14 14:34:04.962888 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 14:34:04.971879 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
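verity-setup.service (started above) assembles /dev/mapper/usr from the USR-A partition and the verity.usrhash root hash on the kernel command line; the later "sha256 using implementation sha256-avx2" line shows which hash backend dm-verity selected. A hedged sketch of an offline check using cryptsetup's `veritysetup verify` subcommand; the device paths are illustrative only, and on Flatcar the hash tree actually lives on the same partition as the data:

```python
import subprocess

# Root hash exactly as passed via verity.usrhash= on the kernel command line.
ROOT_HASH = "8945029ddd0f3864592f8746dde99cfcba228e0d3cb946f5938103dbe8733507"

def verify_usr(data_dev: str, hash_dev: str) -> bool:
    # `veritysetup verify` recomputes the hash tree and fails on any mismatch.
    result = subprocess.run(
        ["veritysetup", "verify", data_dev, hash_dev, ROOT_HASH])
    return result.returncode == 0

# Illustrative call (needs root and real device paths):
# verify_usr("/dev/disk/by-partlabel/USR-A", "/dev/disk/by-partlabel/USR-A")
```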
Jan 14 14:34:04.991618 systemd-networkd[873]: lo: Link UP Jan 14 14:34:04.991627 systemd-networkd[873]: lo: Gained carrier Jan 14 14:34:04.993698 systemd-networkd[873]: Enumeration completed Jan 14 14:34:04.993952 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 14:34:05.001839 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 14:34:05.001844 systemd-networkd[873]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 14:34:05.002293 systemd[1]: Reached target network.target - Network. Jan 14 14:34:05.071497 kernel: mlx5_core e1f4:00:02.0 enP57844s1: Link up Jan 14 14:34:05.106508 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: Data path switched to VF: enP57844s1 Jan 14 14:34:05.106975 systemd-networkd[873]: enP57844s1: Link UP Jan 14 14:34:05.107098 systemd-networkd[873]: eth0: Link UP Jan 14 14:34:05.107284 systemd-networkd[873]: eth0: Gained carrier Jan 14 14:34:05.107298 systemd-networkd[873]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 14 14:34:05.111252 systemd-networkd[873]: enP57844s1: Gained carrier Jan 14 14:34:05.147621 systemd-networkd[873]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jan 14 14:34:05.213868 ignition[820]: Ignition 2.19.0 Jan 14 14:34:05.213882 ignition[820]: Stage: fetch-offline Jan 14 14:34:05.215732 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 14:34:05.213930 ignition[820]: no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:05.213940 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:05.214064 ignition[820]: parsed url from cmdline: "" Jan 14 14:34:05.214069 ignition[820]: no config URL provided Jan 14 14:34:05.214077 ignition[820]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 14:34:05.227542 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
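The DHCPv4 lease above is served by 168.63.129.16, Azure's fixed platform address for DHCP, DNS, and the wireserver; with the link now up, the offline Ignition stage hands off to the networked fetch stage started above. A trivial sketch pulling the lease details out of that log line:

```python
import re

line = ("eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 "
        "acquired from 168.63.129.16")

match = re.search(r"address (\S+), gateway (\S+) acquired from (\S+)$", line)
address, gateway, server = match.groups()
print(address, gateway, server)  # 10.200.8.34/24 10.200.8.1 168.63.129.16
```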
Jan 14 14:34:05.214087 ignition[820]: no config at "/usr/lib/ignition/user.ign" Jan 14 14:34:05.214094 ignition[820]: failed to fetch config: resource requires networking Jan 14 14:34:05.214319 ignition[820]: Ignition finished successfully Jan 14 14:34:05.260591 ignition[882]: Ignition 2.19.0 Jan 14 14:34:05.260603 ignition[882]: Stage: fetch Jan 14 14:34:05.260825 ignition[882]: no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:05.260842 ignition[882]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:05.260969 ignition[882]: parsed url from cmdline: "" Jan 14 14:34:05.260974 ignition[882]: no config URL provided Jan 14 14:34:05.260981 ignition[882]: reading system config file "/usr/lib/ignition/user.ign" Jan 14 14:34:05.260989 ignition[882]: no config at "/usr/lib/ignition/user.ign" Jan 14 14:34:05.261009 ignition[882]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jan 14 14:34:05.357813 ignition[882]: GET result: OK Jan 14 14:34:05.357934 ignition[882]: config has been read from IMDS userdata Jan 14 14:34:05.357965 ignition[882]: parsing config with SHA512: abef57d44710a2f2fee4673ea91c306ffc1a3965e10143e11f062574615b7b1dd7da352d4a0f6c2a9d14ed6cd6f004e2fd6521a77536f26f0433e7c19e0c16ed Jan 14 14:34:05.363633 unknown[882]: fetched base config from "system" Jan 14 14:34:05.363648 unknown[882]: fetched base config from "system" Jan 14 14:34:05.364161 ignition[882]: fetch: fetch complete Jan 14 14:34:05.363661 unknown[882]: fetched user config from "azure" Jan 14 14:34:05.364167 ignition[882]: fetch: fetch passed Jan 14 14:34:05.365937 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 14 14:34:05.364217 ignition[882]: Ignition finished successfully Jan 14 14:34:05.378645 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 14 14:34:05.394938 ignition[888]: Ignition 2.19.0 Jan 14 14:34:05.394948 ignition[888]: Stage: kargs Jan 14 14:34:05.395191 ignition[888]: no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:05.398129 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 14 14:34:05.395205 ignition[888]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:05.396074 ignition[888]: kargs: kargs passed Jan 14 14:34:05.396120 ignition[888]: Ignition finished successfully Jan 14 14:34:05.413653 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 14 14:34:05.430353 ignition[895]: Ignition 2.19.0 Jan 14 14:34:05.430364 ignition[895]: Stage: disks Jan 14 14:34:05.430622 ignition[895]: no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:05.430638 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:05.431481 ignition[895]: disks: disks passed Jan 14 14:34:05.431532 ignition[895]: Ignition finished successfully Jan 14 14:34:05.441755 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 14 14:34:05.444173 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 14:34:05.448967 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 14:34:05.451905 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 14:34:05.456616 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 14:34:05.461335 systemd[1]: Reached target basic.target - Basic System. Jan 14 14:34:05.476617 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
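The fetch stage above GETs user data from the Azure Instance Metadata Service at 169.254.169.254 and logs a SHA512 of the config it parsed. A minimal sketch of the same request, assuming the standard IMDS behavior that requests must carry a "Metadata: true" header and that the userData endpoint returns base64 (the hash here is of the decoded payload, purely for illustration):

```python
import base64
import hashlib
import urllib.request

URL = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")

def fetch_userdata() -> bytes:
    # IMDS rejects requests that lack the Metadata header.
    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        return base64.b64decode(resp.read())

if __name__ == "__main__":
    data = fetch_userdata()
    print("sha512:", hashlib.sha512(data).hexdigest())
```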
Jan 14 14:34:05.510420 systemd-fsck[903]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jan 14 14:34:05.514562 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 14:34:05.527553 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 14:34:05.616826 kernel: EXT4-fs (sda9): mounted filesystem 39899d4c-a8b1-4feb-9875-e812cc535888 r/w with ordered data mode. Quota mode: none. Jan 14 14:34:05.617481 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 14:34:05.621652 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 14:34:05.642569 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 14:34:05.646390 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 14:34:05.654633 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 14 14:34:05.669754 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (914) Jan 14 14:34:05.669791 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:05.669813 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 14:34:05.669833 kernel: BTRFS info (device sda6): using free space tree Jan 14 14:34:05.660297 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 14:34:05.660329 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 14:34:05.681823 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 14:34:05.684624 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 14:34:05.691014 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 14:34:05.695690 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 14:34:05.862144 coreos-metadata[916]: Jan 14 14:34:05.862 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jan 14 14:34:05.866000 coreos-metadata[916]: Jan 14 14:34:05.865 INFO Fetch successful Jan 14 14:34:05.866000 coreos-metadata[916]: Jan 14 14:34:05.865 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jan 14 14:34:05.880430 coreos-metadata[916]: Jan 14 14:34:05.880 INFO Fetch successful Jan 14 14:34:05.883903 coreos-metadata[916]: Jan 14 14:34:05.883 INFO wrote hostname ci-4081.3.0-a-0bb245c6fa to /sysroot/etc/hostname Jan 14 14:34:05.886176 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 14:34:05.900049 initrd-setup-root[944]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 14:34:05.910613 initrd-setup-root[951]: cut: /sysroot/etc/group: No such file or directory Jan 14 14:34:05.918554 initrd-setup-root[958]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 14:34:05.923799 initrd-setup-root[965]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 14:34:06.188273 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 14:34:06.197685 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 14:34:06.205691 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 14:34:06.214488 kernel: BTRFS info (device sda6): last unmount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:06.208991 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
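The fsck summary above doubles as a capacity report: 14 of 7326000 inodes and 477710 of 7359488 blocks in use, so the freshly provisioned ROOT filesystem is nearly empty at this point:

```python
files_used, files_total = 14, 7_326_000
blocks_used, blocks_total = 477_710, 7_359_488
print(f"inodes: {100 * files_used / files_total:.4f}% used")    # 0.0002%
print(f"blocks: {100 * blocks_used / blocks_total:.1f}% used")  # 6.5%
```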
Jan 14 14:34:06.238158 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 14:34:06.247436 ignition[1035]: INFO : Ignition 2.19.0 Jan 14 14:34:06.247436 ignition[1035]: INFO : Stage: mount Jan 14 14:34:06.251311 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:06.251311 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:06.251311 ignition[1035]: INFO : mount: mount passed Jan 14 14:34:06.251311 ignition[1035]: INFO : Ignition finished successfully Jan 14 14:34:06.261952 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 14:34:06.270579 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 14:34:06.279223 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 14:34:06.295484 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1046) Jan 14 14:34:06.299478 kernel: BTRFS info (device sda6): first mount of filesystem 70d8a0b5-70da-4efb-a618-d15543718b1e Jan 14 14:34:06.299519 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 14 14:34:06.303510 kernel: BTRFS info (device sda6): using free space tree Jan 14 14:34:06.308778 kernel: BTRFS info (device sda6): auto enabling async discard Jan 14 14:34:06.310236 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 14 14:34:06.330451 ignition[1063]: INFO : Ignition 2.19.0 Jan 14 14:34:06.330451 ignition[1063]: INFO : Stage: files Jan 14 14:34:06.334174 ignition[1063]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:06.334174 ignition[1063]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:06.334174 ignition[1063]: DEBUG : files: compiled without relabeling support, skipping Jan 14 14:34:06.341767 ignition[1063]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 14:34:06.341767 ignition[1063]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 14:34:06.363769 ignition[1063]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 14:34:06.367364 ignition[1063]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 14:34:06.370895 unknown[1063]: wrote ssh authorized keys file for user: core Jan 14 14:34:06.373372 ignition[1063]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 14:34:06.376849 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 14:34:06.381276 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 14 14:34:06.465712 systemd-networkd[873]: enP57844s1: Gained IPv6LL Jan 14 14:34:06.466084 systemd-networkd[873]: eth0: Gained IPv6LL Jan 14 14:34:06.718180 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 14:34:07.135617 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 14 14:34:07.135617 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 14 14:34:07.144796 ignition[1063]: INFO : 
files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 14:34:07.144796 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 14:34:07.160767 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 14:34:07.164538 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 14:34:07.168447 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 14:34:07.172760 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 14:34:07.176758 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.180747 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 14 14:34:07.612140 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 14:34:07.970065 ignition[1063]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 14 14:34:07.970065 ignition[1063]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 14:34:07.981372 ignition[1063]: INFO : files: files passed Jan 14 14:34:07.981372 ignition[1063]: INFO : 
Ignition finished successfully Jan 14 14:34:07.976423 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 14:34:08.018795 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 14:34:08.028179 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 14:34:08.031042 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 14:34:08.031137 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 14:34:08.054309 initrd-setup-root-after-ignition[1092]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.054309 initrd-setup-root-after-ignition[1092]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.058023 initrd-setup-root-after-ignition[1096]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 14:34:08.057482 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 14:34:08.071450 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 14:34:08.083653 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 14:34:08.108171 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 14:34:08.108294 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 14:34:08.114024 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 14:34:08.120995 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 14:34:08.123362 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 14:34:08.129678 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 14:34:08.141324 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:34:08.150639 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 14:34:08.163247 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 14:34:08.163458 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 14:34:08.163935 systemd[1]: Stopped target timers.target - Timer Units. Jan 14 14:34:08.164309 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 14:34:08.164418 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 14:34:08.165397 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 14:34:08.165857 systemd[1]: Stopped target basic.target - Basic System. Jan 14 14:34:08.166233 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 14:34:08.166594 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 14:34:08.166941 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 14:34:08.167295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 14:34:08.167654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 14:34:08.168017 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 14:34:08.168362 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 14:34:08.169063 systemd[1]: Stopped target swap.target - Swaps. 
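The files stage that just finished is declarative: each op() above writes one file, link, or unit. Ops (a) and (9), for example, fetch the kubernetes sysext image and activate it by linking it under /etc/extensions. A rough re-creation of those two ops with paths and URL copied from the log (real Ignition adds verification, retries, and correct ownership, and this needs root to run):

```python
import os
import urllib.request

RAW_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
           "latest/kubernetes-v1.30.1-x86-64.raw")
TARGET = "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
LINK = "/etc/extensions/kubernetes.raw"

os.makedirs(os.path.dirname(TARGET), exist_ok=True)
urllib.request.urlretrieve(RAW_URL, TARGET)   # op(a): write the image
os.makedirs(os.path.dirname(LINK), exist_ok=True)
os.symlink(TARGET, LINK)                      # op(9): activate the extension
```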
Jan 14 14:34:08.169409 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 14:34:08.169555 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 14:34:08.170145 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 14:34:08.276539 ignition[1116]: INFO : Ignition 2.19.0 Jan 14 14:34:08.276539 ignition[1116]: INFO : Stage: umount Jan 14 14:34:08.276539 ignition[1116]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 14:34:08.276539 ignition[1116]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jan 14 14:34:08.170537 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 14:34:08.289908 ignition[1116]: INFO : umount: umount passed Jan 14 14:34:08.289908 ignition[1116]: INFO : Ignition finished successfully Jan 14 14:34:08.170844 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 14:34:08.204054 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 14:34:08.206841 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 14:34:08.207009 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 14:34:08.216563 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 14:34:08.216693 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 14:34:08.219127 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 14:34:08.219268 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 14:34:08.219474 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 14 14:34:08.219606 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 14 14:34:08.243215 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 14:34:08.246629 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 14:34:08.248991 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 14:34:08.249161 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 14:34:08.252175 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 14:34:08.252324 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 14:34:08.258531 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 14:34:08.258643 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 14 14:34:08.281738 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 14:34:08.282213 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 14:34:08.285176 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 14:34:08.285292 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 14:34:08.293856 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 14:34:08.293914 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 14:34:08.303569 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 14:34:08.303624 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 14:34:08.310597 systemd[1]: Stopped target network.target - Network. Jan 14 14:34:08.314926 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 14:34:08.315001 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 14 14:34:08.365777 systemd[1]: Stopped target paths.target - Path Units. Jan 14 14:34:08.365881 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 14:34:08.372202 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 14:34:08.372296 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 14:34:08.373037 systemd[1]: Stopped target sockets.target - Socket Units. Jan 14 14:34:08.373454 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 14:34:08.373510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 14:34:08.373819 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 14:34:08.373855 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 14:34:08.374145 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 14:34:08.374189 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 14:34:08.374519 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 14:34:08.374553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 14:34:08.375007 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 14:34:08.375273 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 14 14:34:08.377142 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 14:34:08.403626 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 14 14:34:08.403748 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 14 14:34:08.407530 systemd-networkd[873]: eth0: DHCPv6 lease lost Jan 14 14:34:08.410063 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 14 14:34:08.410178 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 14 14:34:08.414361 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 14 14:34:08.414431 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 14 14:34:08.440286 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 14 14:34:08.448576 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 14 14:34:08.448670 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 14 14:34:08.456306 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 14 14:34:08.456376 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:34:08.463006 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 14 14:34:08.463071 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 14 14:34:08.467706 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 14:34:08.467759 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 14:34:08.477359 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 14 14:34:08.500225 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 14 14:34:08.500393 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 14:34:08.506137 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 14 14:34:08.506184 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 14 14:34:08.511200 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 14 14:34:08.511243 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 14:34:08.516062 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 14 14:34:08.516112 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 14 14:34:08.516646 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 14 14:34:08.516690 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 14 14:34:08.540116 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 14 14:34:08.545057 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: Data path switched from VF: enP57844s1 Jan 14 14:34:08.540194 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 14 14:34:08.550705 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 14 14:34:08.553196 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 14 14:34:08.553266 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 14:34:08.556262 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 14 14:34:08.556329 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 14:34:08.561825 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 14:34:08.564129 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 14:34:08.567005 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 14 14:34:08.567058 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 14:34:08.570154 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 14 14:34:08.570246 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 14 14:34:08.575124 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 14 14:34:08.575206 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 14 14:34:08.835718 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 14 14:34:08.835880 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 14:34:08.840714 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 14 14:34:08.844685 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 14:34:08.844754 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 14:34:08.860662 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 14 14:34:09.194210 systemd[1]: Switching root. Jan 14 14:34:09.238312 systemd-journald[176]: Journal stopped Jan 14 14:34:11.081542 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). 
Jan 14 14:34:11.081605 kernel: SELinux: policy capability network_peer_controls=1 Jan 14 14:34:11.081628 kernel: SELinux: policy capability open_perms=1 Jan 14 14:34:11.081646 kernel: SELinux: policy capability extended_socket_class=1 Jan 14 14:34:11.081665 kernel: SELinux: policy capability always_check_network=0 Jan 14 14:34:11.081686 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 14 14:34:11.081704 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 14 14:34:11.081728 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 14 14:34:11.081750 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 14 14:34:11.081769 kernel: audit: type=1403 audit(1736865249.645:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 14 14:34:11.081792 systemd[1]: Successfully loaded SELinux policy in 96.053ms. Jan 14 14:34:11.081817 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.436ms. Jan 14 14:34:11.081839 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 14 14:34:11.081858 systemd[1]: Detected virtualization microsoft. Jan 14 14:34:11.081888 systemd[1]: Detected architecture x86-64. Jan 14 14:34:11.081909 systemd[1]: Detected first boot. Jan 14 14:34:11.081931 systemd[1]: Hostname set to . Jan 14 14:34:11.081951 systemd[1]: Initializing machine ID from random generator. Jan 14 14:34:11.081971 zram_generator::config[1159]: No configuration found. Jan 14 14:34:11.081996 systemd[1]: Populated /etc with preset unit settings. Jan 14 14:34:11.082015 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 14 14:34:11.082030 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 14 14:34:11.082081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 14 14:34:11.082101 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 14 14:34:11.082114 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 14 14:34:11.082138 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 14 14:34:11.082157 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 14 14:34:11.082174 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 14 14:34:11.082190 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 14 14:34:11.082358 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 14 14:34:11.082376 systemd[1]: Created slice user.slice - User and Session Slice. Jan 14 14:34:11.082386 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 14:34:11.082397 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 14:34:11.082407 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 14 14:34:11.082423 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 14 14:34:11.082435 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 14 14:34:11.082448 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 14 14:34:11.082460 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 14 14:34:11.082487 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 14:34:11.082499 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 14 14:34:11.082517 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 14 14:34:11.082530 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 14 14:34:11.082543 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 14 14:34:11.082556 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 14:34:11.082569 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 14 14:34:11.082579 systemd[1]: Reached target slices.target - Slice Units. Jan 14 14:34:11.082589 systemd[1]: Reached target swap.target - Swaps. Jan 14 14:34:11.082601 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 14 14:34:11.082611 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 14 14:34:11.082626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 14 14:34:11.082637 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 14 14:34:11.082650 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 14 14:34:11.082662 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 14 14:34:11.082674 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 14 14:34:11.082690 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 14 14:34:11.082700 systemd[1]: Mounting media.mount - External Media Directory... Jan 14 14:34:11.082713 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 14:34:11.082729 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 14 14:34:11.082740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 14 14:34:11.082752 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 14 14:34:11.082766 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 14 14:34:11.082776 systemd[1]: Reached target machines.target - Containers. Jan 14 14:34:11.082791 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 14 14:34:11.082803 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 14 14:34:11.082817 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 14 14:34:11.082830 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 14 14:34:11.082843 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 14:34:11.082857 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 14:34:11.082869 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 14:34:11.082882 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jan 14 14:34:11.082895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 14:34:11.082908 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 14 14:34:11.082921 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 14 14:34:11.082933 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 14 14:34:11.082944 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 14 14:34:11.082957 systemd[1]: Stopped systemd-fsck-usr.service. Jan 14 14:34:11.082967 kernel: fuse: init (API version 7.39) Jan 14 14:34:11.082980 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 14 14:34:11.082994 kernel: loop: module loaded Jan 14 14:34:11.083007 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 14 14:34:11.083021 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 14 14:34:11.083032 kernel: ACPI: bus type drm_connector registered Jan 14 14:34:11.083066 systemd-journald[1251]: Collecting audit messages is disabled. Jan 14 14:34:11.083096 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 14 14:34:11.083107 systemd-journald[1251]: Journal started Jan 14 14:34:11.083133 systemd-journald[1251]: Runtime Journal (/run/log/journal/3514efd127d84fe5ab22ac70f40592f9) is 8.0M, max 158.8M, 150.8M free. Jan 14 14:34:10.543273 systemd[1]: Queued start job for default target multi-user.target. Jan 14 14:34:10.588884 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 14 14:34:10.589272 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 14 14:34:11.093955 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 14 14:34:11.100269 systemd[1]: verity-setup.service: Deactivated successfully. Jan 14 14:34:11.100351 systemd[1]: Stopped verity-setup.service. Jan 14 14:34:11.112498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 14 14:34:11.116494 systemd[1]: Started systemd-journald.service - Journal Service. Jan 14 14:34:11.119845 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 14 14:34:11.122371 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 14 14:34:11.125000 systemd[1]: Mounted media.mount - External Media Directory. Jan 14 14:34:11.127582 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 14 14:34:11.130566 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 14 14:34:11.133600 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 14 14:34:11.136377 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 14 14:34:11.139839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 14:34:11.143234 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 14 14:34:11.143654 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 14 14:34:11.147121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 14:34:11.147452 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 14:34:11.151143 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jan 14 14:34:11.151447 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 14:34:11.154454 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 14:34:11.154853 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 14:34:11.158331 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 14 14:34:11.158651 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 14 14:34:11.161947 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 14:34:11.162236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 14:34:11.165435 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 14 14:34:11.168752 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 14 14:34:11.172142 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 14 14:34:11.190166 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 14 14:34:11.200624 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 14 14:34:11.206241 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 14 14:34:11.209095 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 14 14:34:11.209227 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 14:34:11.212760 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 14 14:34:11.216481 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 14:34:11.224592 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 14:34:11.227088 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 14:34:11.228823 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 14 14:34:11.233744 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 14 14:34:11.236560 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 14:34:11.238725 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 14 14:34:11.241705 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 14:34:11.245188 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 14 14:34:11.249636 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 14 14:34:11.255312 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 14 14:34:11.270661 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 14:34:11.274790 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 14 14:34:11.284984 systemd-journald[1251]: Time spent on flushing to /var/log/journal/3514efd127d84fe5ab22ac70f40592f9 is 29.063ms for 959 entries. Jan 14 14:34:11.284984 systemd-journald[1251]: System Journal (/var/log/journal/3514efd127d84fe5ab22ac70f40592f9) is 8.0M, max 2.6G, 2.6G free. 
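journald's self-report just below is easy to sanity-check: 29.063 ms to flush 959 entries to the persistent journal works out to roughly 30 µs per entry:

```python
flush_ms, entries = 29.063, 959
print(f"{flush_ms / entries * 1000:.1f} µs per entry")  # ≈ 30.3 µs
```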
Jan 14 14:34:11.330907 systemd-journald[1251]: Received client request to flush runtime journal. Jan 14 14:34:11.330947 kernel: loop0: detected capacity change from 0 to 140768 Jan 14 14:34:11.282308 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 14 14:34:11.289054 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 14:34:11.292882 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 14 14:34:11.307735 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 14 14:34:11.316644 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 14 14:34:11.328356 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 14 14:34:11.333162 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 14 14:34:11.363611 udevadm[1305]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 14 14:34:11.393092 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 14 14:34:11.410196 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 14:34:11.410224 systemd-tmpfiles[1296]: ACLs are not supported, ignoring. Jan 14 14:34:11.416219 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 14 14:34:11.425675 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 14 14:34:11.442747 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 14 14:34:11.445916 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 14 14:34:11.522490 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 14 14:34:11.546023 kernel: loop1: detected capacity change from 0 to 210664 Jan 14 14:34:11.558879 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 14 14:34:11.572724 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 14 14:34:11.584753 kernel: loop2: detected capacity change from 0 to 31056 Jan 14 14:34:11.592182 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 14 14:34:11.592208 systemd-tmpfiles[1318]: ACLs are not supported, ignoring. Jan 14 14:34:11.599811 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 14 14:34:11.699495 kernel: loop3: detected capacity change from 0 to 142488 Jan 14 14:34:11.838496 kernel: loop4: detected capacity change from 0 to 140768 Jan 14 14:34:11.861501 kernel: loop5: detected capacity change from 0 to 210664 Jan 14 14:34:11.871640 kernel: loop6: detected capacity change from 0 to 31056 Jan 14 14:34:11.882487 kernel: loop7: detected capacity change from 0 to 142488 Jan 14 14:34:11.901205 (sd-merge)[1323]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jan 14 14:34:11.902113 (sd-merge)[1323]: Merged extensions into '/usr'. Jan 14 14:34:11.916505 systemd[1]: Reloading requested from client PID 1295 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 14:34:11.916525 systemd[1]: Reloading... Jan 14 14:34:12.007519 zram_generator::config[1345]: No configuration found. 
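The (sd-merge) lines above show systemd-sysext discovering four extension images and overlaying them onto /usr; the paired loop-device "capacity change" messages correspond to each image being attached as a loop device before the merge. A sketch of the discovery step, assuming the standard sysext search directories:

```python
import os

# systemd-sysext scans these hierarchies for *.raw images (or plain
# directories) and merges whatever it finds into /usr and /opt.
SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for d in SEARCH_DIRS:
    if not os.path.isdir(d):
        continue
    for name in sorted(os.listdir(d)):
        if name.endswith(".raw"):
            print(os.path.join(d, name))
```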
Jan 14 14:34:12.215676 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:34:12.311273 systemd[1]: Reloading finished in 394 ms.
Jan 14 14:34:12.345040 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 14 14:34:12.354655 systemd[1]: Starting ensure-sysext.service...
Jan 14 14:34:12.359350 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 14:34:12.379695 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 14 14:34:12.380165 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 14 14:34:12.381065 systemd-tmpfiles[1408]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 14 14:34:12.381344 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Jan 14 14:34:12.381395 systemd-tmpfiles[1408]: ACLs are not supported, ignoring.
Jan 14 14:34:12.390130 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 14:34:12.390144 systemd-tmpfiles[1408]: Skipping /boot
Jan 14 14:34:12.402177 systemd[1]: Reloading requested from client PID 1407 ('systemctl') (unit ensure-sysext.service)...
Jan 14 14:34:12.402200 systemd[1]: Reloading...
Jan 14 14:34:12.402786 systemd-tmpfiles[1408]: Detected autofs mount point /boot during canonicalization of boot.
Jan 14 14:34:12.402794 systemd-tmpfiles[1408]: Skipping /boot
Jan 14 14:34:12.504529 zram_generator::config[1433]: No configuration found.
Jan 14 14:34:12.642970 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:34:12.712173 systemd[1]: Reloading finished in 309 ms.
Jan 14 14:34:12.729641 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 14 14:34:12.737386 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 14:34:12.755328 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 14 14:34:12.767577 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 14 14:34:12.778798 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 14 14:34:12.792909 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 14:34:12.806815 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 14:34:12.818780 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 14 14:34:12.825254 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:12.825556 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:34:12.832742 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:34:12.843822 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:34:12.856767 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
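[Editor's note: systemd warns (twice above) that docker.socket still references the legacy /var/run tree; it rewrites the path at load time but asks for the unit to be fixed. A sketch of the one-line correction the warning requests; the surrounding unit content is not shown in this log:]

    # /usr/lib/systemd/system/docker.socket, line 6 — before:
    ListenStream=/var/run/docker.sock
    # after (/var/run is only a symlink to /run):
    ListenStream=/run/docker.sock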
Jan 14 14:34:12.862028 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:34:12.870770 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 14:34:12.877646 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:12.885023 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 14 14:34:12.890743 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:34:12.890951 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:34:12.898397 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:34:12.898658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:34:12.905591 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:34:12.905801 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:34:12.920375 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:12.920724 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 14:34:12.924813 systemd-udevd[1508]: Using default interface naming scheme 'v255'.
Jan 14 14:34:12.929893 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:34:12.934056 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:34:12.949056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 14:34:12.953894 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:34:12.954062 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:12.955641 augenrules[1522]: No rules
Jan 14 14:34:12.955376 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 14 14:34:12.964093 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 14 14:34:12.976622 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 14 14:34:12.980267 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:34:12.980460 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:34:12.984558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:34:12.984739 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:34:12.989140 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:34:12.989311 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:34:12.997245 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 14 14:34:13.016940 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv...
Jan 14 14:34:13.020091 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:13.020365 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
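[Editor's note: augenrules reports "No rules" above, meaning audit-rules.service loaded but no rule files were compiled from /etc/audit/rules.d/. A hypothetical example of adding one; the file name and rule are illustrative, not from this host:]

    # /etc/audit/rules.d/10-exec.rules (hypothetical example rule)
    -a always,exit -F arch=b64 -S execve -k exec-log
    # Regenerate and load the merged /etc/audit/audit.rules:
    augenrules --load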
Jan 14 14:34:13.029681 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 14:34:13.038488 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 14:34:13.050617 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 14:34:13.059969 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 14:34:13.063973 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 14:34:13.064066 systemd[1]: Reached target time-set.target - System Time Set.
Jan 14 14:34:13.070117 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 14 14:34:13.070154 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jan 14 14:34:13.070887 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 14:34:13.080639 systemd[1]: Finished ensure-sysext.service.
Jan 14 14:34:13.085367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 14:34:13.086546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 14:34:13.087282 systemd-resolved[1507]: Positive Trust Anchors:
Jan 14 14:34:13.087625 systemd-resolved[1507]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 14:34:13.087671 systemd-resolved[1507]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 14:34:13.092744 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 14:34:13.092922 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 14:34:13.100456 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 14:34:13.100730 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 14:34:13.105360 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 14:34:13.105635 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 14:34:13.133376 systemd-resolved[1507]: Using system hostname 'ci-4081.3.0-a-0bb245c6fa'.
Jan 14 14:34:13.145645 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 14:34:13.150892 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 14:34:13.150977 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 14:34:13.151127 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 14:34:13.159179 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
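[Editor's note: the "Positive Trust Anchors" entry above is the DNSSEC root key-signing key (key tag 20326); the negative anchors exempt private and reverse-lookup zones from validation. A sketch of where this behaviour is tunable; the value shown is illustrative, not this host's setting:]

    # /etc/systemd/resolved.conf (illustrative; the root anchor above is built in)
    [Resolve]
    DNSSEC=allow-downgrade
    # Inspect what resolved is actually using at runtime:
    resolvectl status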
Jan 14 14:34:13.159229 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 14:34:13.307497 kernel: mousedev: PS/2 mouse device common for all mice
Jan 14 14:34:13.312518 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 14 14:34:13.318523 systemd-networkd[1575]: lo: Link UP
Jan 14 14:34:13.318542 systemd-networkd[1575]: lo: Gained carrier
Jan 14 14:34:13.325215 systemd-networkd[1575]: Enumeration completed
Jan 14 14:34:13.325331 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 14:34:13.328218 systemd[1]: Reached target network.target - Network.
Jan 14 14:34:13.333509 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:34:13.333514 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:34:13.353490 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1568)
Jan 14 14:34:13.343614 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 14:34:13.350145 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 14 14:34:13.366134 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 14 14:34:13.414553 kernel: hv_vmbus: registering driver hv_balloon
Jan 14 14:34:13.416010 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 14 14:34:13.424512 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Jan 14 14:34:13.427478 kernel: mlx5_core e1f4:00:02.0 enP57844s1: Link up
Jan 14 14:34:13.439757 kernel: hv_vmbus: registering driver hyperv_fb
Jan 14 14:34:13.439801 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Jan 14 14:34:13.444481 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Jan 14 14:34:13.449314 kernel: Console: switching to colour dummy device 80x25
Jan 14 14:34:13.455491 kernel: hv_netvsc 7c1e5220-295d-7c1e-5220-295d7c1e5220 eth0: Data path switched to VF: enP57844s1
Jan 14 14:34:13.460088 kernel: Console: switching to colour frame buffer device 128x48
Jan 14 14:34:13.461844 systemd-networkd[1575]: enP57844s1: Link UP
Jan 14 14:34:13.463300 systemd-networkd[1575]: eth0: Link UP
Jan 14 14:34:13.463310 systemd-networkd[1575]: eth0: Gained carrier
Jan 14 14:34:13.463338 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:34:13.469619 systemd-networkd[1575]: enP57844s1: Gained carrier
Jan 14 14:34:13.476614 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped.
Jan 14 14:34:13.490520 systemd-networkd[1575]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:34:13.619833 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:13.632265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Jan 14 14:34:13.644806 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 14 14:34:13.649824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:34:13.651562 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
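[Editor's note: networkd warns that zz-default.network matched eth0 "based on potentially unpredictable interface name". On Azure the synthetic hv_netvsc NIC and its mlx5 SR-IOV VF (enP57844s1) share one MAC, so name-based matching is fragile. A hypothetical override that pins the match instead; the MAC is taken from the hv_netvsc line above, the file name is illustrative:]

    # /etc/systemd/network/10-eth0.network (hypothetical override)
    [Match]
    MACAddress=7c:1e:52:20:29:5d
    Driver=hv_netvsc          # avoid also matching the accelerated-networking VF
    [Network]
    DHCP=ipv4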
Jan 14 14:34:13.662605 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:13.698914 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 14 14:34:13.710896 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 14:34:13.711140 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:13.725808 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 14:34:13.732480 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Jan 14 14:34:13.812239 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 14 14:34:13.823822 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 14 14:34:13.836433 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 14:34:13.849095 lvm[1648]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 14:34:13.899526 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 14 14:34:13.903751 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 14:34:13.906778 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 14 14:34:13.909896 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 14 14:34:13.912820 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 14 14:34:13.915885 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 14 14:34:13.918354 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 14 14:34:13.921124 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 14 14:34:13.923915 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 14 14:34:13.923951 systemd[1]: Reached target paths.target - Path Units.
Jan 14 14:34:13.925951 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 14:34:13.928760 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 14 14:34:13.932825 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 14 14:34:13.942576 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 14 14:34:13.946454 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 14 14:34:13.950063 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 14 14:34:13.952706 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 14:34:13.955277 systemd[1]: Reached target basic.target - Basic System.
Jan 14 14:34:13.957326 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 14 14:34:13.957479 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 14 14:34:13.972988 lvm[1654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 14 14:34:13.984563 systemd[1]: Starting chronyd.service - NTP client/server...
Jan 14 14:34:13.993650 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 14 14:34:14.000648 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
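[Editor's note: the lvmetad warnings above are benign; with no lvmetad daemon running, the LVM tools simply scan devices directly. On LVM versions that still ship lvmetad, the fallback can be made explicit; the snippet below is illustrative, not this host's configuration:]

    # /etc/lvm/lvm.conf (illustrative; only for LVM versions that still support lvmetad)
    global {
        use_lvmetad = 0    # scan devices directly instead of querying the lvmetad daemon
    }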
Jan 14 14:34:14.013706 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 14 14:34:14.020588 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 14 14:34:14.035659 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 14 14:34:14.040367 (chronyd)[1655]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Jan 14 14:34:14.040856 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 14 14:34:14.040903 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Jan 14 14:34:14.043720 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Jan 14 14:34:14.046269 jq[1659]: false
Jan 14 14:34:14.046404 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Jan 14 14:34:14.056104 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 14 14:34:14.060905 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 14 14:34:14.070118 chronyd[1669]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Jan 14 14:34:14.070705 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 14 14:34:14.076265 KVP[1663]: KVP starting; pid is:1663
Jan 14 14:34:14.075641 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 14 14:34:14.084253 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 14 14:34:14.089080 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 14 14:34:14.090677 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jan 14 14:34:14.097657 systemd[1]: Starting update-engine.service - Update Engine...
Jan 14 14:34:14.098313 dbus-daemon[1658]: [system] SELinux support is enabled
Jan 14 14:34:14.103818 chronyd[1669]: Timezone right/UTC failed leap second check, ignoring
Jan 14 14:34:14.104051 chronyd[1669]: Loaded seccomp filter (level 2)
Jan 14 14:34:14.106633 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 14 14:34:14.110298 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 14 14:34:14.118490 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 14 14:34:14.121864 systemd[1]: Started chronyd.service - NTP client/server.
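[Editor's note: chronyd 4.5 notes that the right/UTC timezone database failed its leap-second sanity check (so it ignores it) and that it confined itself with a level-2 seccomp filter. A sketch of the two knobs involved; both lines are illustrative, not taken from this host:]

    # /etc/chrony/chrony.conf (illustrative)
    leapsectz right/UTC      # derive leap seconds from the tz database, when it passes the check
    # The seccomp filter level is a command-line option; on this unit it would
    # normally arrive via the (unset) $OPTIONS variable, e.g.:
    # chronyd -F 2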
Jan 14 14:34:14.125121 KVP[1663]: KVP LIC Version: 3.1
Jan 14 14:34:14.125503 kernel: hv_utils: KVP IC version 4.0
Jan 14 14:34:14.127365 extend-filesystems[1662]: Found loop4
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found loop5
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found loop6
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found loop7
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda1
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda2
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda3
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found usr
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda4
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda6
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda7
Jan 14 14:34:14.129206 extend-filesystems[1662]: Found sda9
Jan 14 14:34:14.129206 extend-filesystems[1662]: Checking size of /dev/sda9
Jan 14 14:34:14.159807 coreos-metadata[1657]: Jan 14 14:34:14.156 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Jan 14 14:34:14.179738 jq[1679]: true
Jan 14 14:34:14.180067 extend-filesystems[1662]: Old size kept for /dev/sda9
Jan 14 14:34:14.180067 extend-filesystems[1662]: Found sr0
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.160 INFO Fetch successful
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.161 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.166 INFO Fetch successful
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.166 INFO Fetching http://168.63.129.16/machine/16ef705b-3b22-4df8-a090-16c5b5b77d91/5a18ef58%2D4215%2D4501%2D9ace%2Dc63ec664ace5.%5Fci%2D4081.3.0%2Da%2D0bb245c6fa?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.168 INFO Fetch successful
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.168 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Jan 14 14:34:14.200698 coreos-metadata[1657]: Jan 14 14:34:14.182 INFO Fetch successful
Jan 14 14:34:14.164244 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 14 14:34:14.165029 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 14 14:34:14.165482 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jan 14 14:34:14.165698 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jan 14 14:34:14.170450 systemd[1]: motdgen.service: Deactivated successfully.
Jan 14 14:34:14.170710 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 14 14:34:14.192674 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 14 14:34:14.192891 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
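[Editor's note: coreos-metadata talks to two Azure endpoints above: the WireServer at 168.63.129.16 (goal state and provisioning) and the Instance Metadata Service at 169.254.169.254. IMDS requires the Metadata: true header; a sketch of the same vmSize query done by hand, using the exact URL from the log:]

    # Query the Azure Instance Metadata Service (same URL the agent fetched above)
    curl -s -H "Metadata: true" \
      "http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text"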
Jan 14 14:34:14.231935 jq[1696]: true
Jan 14 14:34:14.241987 (ntainerd)[1697]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 14 14:34:14.251503 update_engine[1674]: I20250114 14:34:14.245180 1674 main.cc:92] Flatcar Update Engine starting
Jan 14 14:34:14.258638 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 14 14:34:14.258682 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 14 14:34:14.266664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 14 14:34:14.266689 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 14 14:34:14.277507 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1560)
Jan 14 14:34:14.286779 systemd[1]: Started update-engine.service - Update Engine.
Jan 14 14:34:14.290765 update_engine[1674]: I20250114 14:34:14.290540 1674 update_check_scheduler.cc:74] Next update check in 7m45s
Jan 14 14:34:14.299549 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Jan 14 14:34:14.310364 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jan 14 14:34:14.321672 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jan 14 14:34:14.338575 systemd-logind[1672]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jan 14 14:34:14.338903 systemd-logind[1672]: New seat seat0.
Jan 14 14:34:14.342056 systemd[1]: Started systemd-logind.service - User Login Management.
Jan 14 14:34:14.351312 tar[1688]: linux-amd64/helm
Jan 14 14:34:14.440496 bash[1752]: Updated "/home/core/.ssh/authorized_keys"
Jan 14 14:34:14.442605 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jan 14 14:34:14.460189 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jan 14 14:34:14.555739 locksmithd[1738]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jan 14 14:34:14.665078 systemd-networkd[1575]: eth0: Gained IPv6LL
Jan 14 14:34:14.672405 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 14 14:34:14.677962 systemd[1]: Reached target network-online.target - Network is Online.
Jan 14 14:34:14.688111 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:34:14.698923 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 14 14:34:14.794425 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 14 14:34:14.915493 systemd-networkd[1575]: enP57844s1: Gained IPv6LL
Jan 14 14:34:15.056365 containerd[1697]: time="2025-01-14T14:34:15.056210800Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Jan 14 14:34:15.154948 containerd[1697]: time="2025-01-14T14:34:15.154560800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
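[Editor's note: update_engine schedules its next check and locksmithd, the cluster reboot manager, starts with strategy "reboot". On Flatcar this strategy is usually set through /etc/flatcar/update.conf; the values below are illustrative, not read from this host:]

    # /etc/flatcar/update.conf (illustrative)
    REBOOT_STRATEGY=reboot      # alternatives include etcd-lock and off
    # locksmithd picks the strategy up on restart:
    # systemctl restart locksmithd.service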
Jan 14 14:34:15.160429 containerd[1697]: time="2025-01-14T14:34:15.160291900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:34:15.160429 containerd[1697]: time="2025-01-14T14:34:15.160345100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jan 14 14:34:15.160429 containerd[1697]: time="2025-01-14T14:34:15.160370500Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.161386700Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.161417500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.161861400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.161884500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162452500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162506100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162526600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162541800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162637200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163609 containerd[1697]: time="2025-01-14T14:34:15.162867300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163930 containerd[1697]: time="2025-01-14T14:34:15.163769700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jan 14 14:34:15.163930 containerd[1697]: time="2025-01-14T14:34:15.163796100Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jan 14 14:34:15.163930 containerd[1697]: time="2025-01-14T14:34:15.163900700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jan 14 14:34:15.164037 containerd[1697]: time="2025-01-14T14:34:15.163964900Z" level=info msg="metadata content store policy set" policy=shared
Jan 14 14:34:15.180108 containerd[1697]: time="2025-01-14T14:34:15.180074200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jan 14 14:34:15.180202 containerd[1697]: time="2025-01-14T14:34:15.180143000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jan 14 14:34:15.180202 containerd[1697]: time="2025-01-14T14:34:15.180166300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jan 14 14:34:15.180296 containerd[1697]: time="2025-01-14T14:34:15.180218600Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jan 14 14:34:15.180296 containerd[1697]: time="2025-01-14T14:34:15.180241300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jan 14 14:34:15.181336 containerd[1697]: time="2025-01-14T14:34:15.181297900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jan 14 14:34:15.181790 containerd[1697]: time="2025-01-14T14:34:15.181765200Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.181910100Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.181944900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.181969100Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.181989600Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182012900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182031800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182052700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182072500Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182090600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182107800Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182123700Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182150500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182170500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.182996 containerd[1697]: time="2025-01-14T14:34:15.182186900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182207000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182225900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182244300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182260900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182281100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182303500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182323800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182340400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182357000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182373200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182393700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182421400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182439400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.183502 containerd[1697]: time="2025-01-14T14:34:15.182456500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183548500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183578000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183668400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183687000Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183703400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183721700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183736400Z" level=info msg="NRI interface is disabled by configuration."
Jan 14 14:34:15.184012 containerd[1697]: time="2025-01-14T14:34:15.183750400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jan 14 14:34:15.184284 containerd[1697]: time="2025-01-14T14:34:15.184150400Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jan 14 14:34:15.184284 containerd[1697]: time="2025-01-14T14:34:15.184231800Z" level=info msg="Connect containerd service"
Jan 14 14:34:15.184284 containerd[1697]: time="2025-01-14T14:34:15.184276900Z" level=info msg="using legacy CRI server"
Jan 14 14:34:15.184570 containerd[1697]: time="2025-01-14T14:34:15.184287300Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jan 14 14:34:15.184570 containerd[1697]: time="2025-01-14T14:34:15.184421800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jan 14 14:34:15.187989 containerd[1697]: time="2025-01-14T14:34:15.186176800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188081000Z" level=info msg="Start subscribing containerd event"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188164300Z" level=info msg="Start recovering state"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188236800Z" level=info msg="Start event monitor"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188256300Z" level=info msg="Start snapshots syncer"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188268000Z" level=info msg="Start cni network conf syncer for default"
Jan 14 14:34:15.188796 containerd[1697]: time="2025-01-14T14:34:15.188277900Z" level=info msg="Start streaming server"
Jan 14 14:34:15.189053 containerd[1697]: time="2025-01-14T14:34:15.188961600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jan 14 14:34:15.191704 systemd[1]: Started containerd.service - containerd container runtime.
Jan 14 14:34:15.193067 containerd[1697]: time="2025-01-14T14:34:15.189080100Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jan 14 14:34:15.194530 containerd[1697]: time="2025-01-14T14:34:15.194502900Z" level=info msg="containerd successfully booted in 0.141839s"
Jan 14 14:34:15.200438 sshd_keygen[1705]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jan 14 14:34:15.242591 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jan 14 14:34:15.259644 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jan 14 14:34:15.267614 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Jan 14 14:34:15.279727 systemd[1]: issuegen.service: Deactivated successfully.
Jan 14 14:34:15.279955 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jan 14 14:34:15.291613 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 14:34:15.326651 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Jan 14 14:34:15.330183 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jan 14 14:34:15.339731 systemd[1]: Started getty@tty1.service - Getty on tty1.
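[Editor's note: the "Start cri plugin" dump above echoes the effective CRI configuration, including Options:map[SystemdCgroup:true] for the runc runtime and SandboxImage:registry.k8s.io/pause:3.8. A sketch of the config.toml fragment that produces those values in the containerd 1.7 layout; this mirrors the dump and is not a verified copy of this host's file:]

    # /etc/containerd/config.toml (illustrative fragment for containerd 1.7)
    version = 2
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true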
Jan 14 14:34:15.344783 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jan 14 14:34:15.353918 systemd[1]: Reached target getty.target - Login Prompts.
Jan 14 14:34:15.363152 tar[1688]: linux-amd64/LICENSE
Jan 14 14:34:15.363506 tar[1688]: linux-amd64/README.md
Jan 14 14:34:15.375927 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jan 14 14:34:15.957649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:34:15.963241 systemd[1]: Reached target multi-user.target - Multi-User System.
Jan 14 14:34:15.966688 systemd[1]: Startup finished in 551ms (firmware) + 7.898s (loader) + 894ms (kernel) + 7.960s (initrd) + 6.416s (userspace) = 23.720s.
Jan 14 14:34:15.974331 (kubelet)[1812]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 14 14:34:16.150870 waagent[1798]: 2025-01-14T14:34:16.150764Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Jan 14 14:34:16.151922 waagent[1798]: 2025-01-14T14:34:16.151796Z INFO Daemon Daemon OS: flatcar 4081.3.0
Jan 14 14:34:16.153106 waagent[1798]: 2025-01-14T14:34:16.153060Z INFO Daemon Daemon Python: 3.11.9
Jan 14 14:34:16.154088 waagent[1798]: 2025-01-14T14:34:16.154043Z INFO Daemon Daemon Run daemon
Jan 14 14:34:16.155147 waagent[1798]: 2025-01-14T14:34:16.155110Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.3.0'
Jan 14 14:34:16.155800 waagent[1798]: 2025-01-14T14:34:16.155761Z INFO Daemon Daemon Using waagent for provisioning
Jan 14 14:34:16.156689 waagent[1798]: 2025-01-14T14:34:16.156653Z INFO Daemon Daemon Activate resource disk
Jan 14 14:34:16.157314 waagent[1798]: 2025-01-14T14:34:16.157278Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Jan 14 14:34:16.161624 waagent[1798]: 2025-01-14T14:34:16.161581Z INFO Daemon Daemon Found device: None
Jan 14 14:34:16.162425 waagent[1798]: 2025-01-14T14:34:16.162389Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Jan 14 14:34:16.163134 waagent[1798]: 2025-01-14T14:34:16.163101Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Jan 14 14:34:16.166877 waagent[1798]: 2025-01-14T14:34:16.165403Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 14:34:16.166877 waagent[1798]: 2025-01-14T14:34:16.166061Z INFO Daemon Daemon Running default provisioning handler
Jan 14 14:34:16.173756 login[1800]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 14:34:16.175688 login[1801]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Jan 14 14:34:16.188141 waagent[1798]: 2025-01-14T14:34:16.188077Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Jan 14 14:34:16.190450 waagent[1798]: 2025-01-14T14:34:16.190404Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Jan 14 14:34:16.191068 waagent[1798]: 2025-01-14T14:34:16.191031Z INFO Daemon Daemon cloud-init is enabled: False
Jan 14 14:34:16.192010 waagent[1798]: 2025-01-14T14:34:16.191976Z INFO Daemon Daemon Copying ovf-env.xml
Jan 14 14:34:16.204613 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jan 14 14:34:16.215965 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jan 14 14:34:16.225331 systemd-logind[1672]: New session 1 of user core.
Jan 14 14:34:16.233184 systemd-logind[1672]: New session 2 of user core.
Jan 14 14:34:16.241336 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jan 14 14:34:16.251521 waagent[1798]: 2025-01-14T14:34:16.248875Z INFO Daemon Daemon Successfully mounted dvd
Jan 14 14:34:16.252139 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jan 14 14:34:16.264395 (systemd)[1824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jan 14 14:34:16.273097 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Jan 14 14:34:16.281785 waagent[1798]: 2025-01-14T14:34:16.275836Z INFO Daemon Daemon Detect protocol endpoint
Jan 14 14:34:16.281785 waagent[1798]: 2025-01-14T14:34:16.279065Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Jan 14 14:34:16.285079 waagent[1798]: 2025-01-14T14:34:16.282159Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Jan 14 14:34:16.285079 waagent[1798]: 2025-01-14T14:34:16.282351Z INFO Daemon Daemon Test for route to 168.63.129.16
Jan 14 14:34:16.285079 waagent[1798]: 2025-01-14T14:34:16.283301Z INFO Daemon Daemon Route to 168.63.129.16 exists
Jan 14 14:34:16.285079 waagent[1798]: 2025-01-14T14:34:16.283985Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Jan 14 14:34:16.297978 waagent[1798]: 2025-01-14T14:34:16.297924Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Jan 14 14:34:16.304677 waagent[1798]: 2025-01-14T14:34:16.298382Z INFO Daemon Daemon Wire protocol version:2012-11-30
Jan 14 14:34:16.304677 waagent[1798]: 2025-01-14T14:34:16.299029Z INFO Daemon Daemon Server preferred version:2015-04-05
Jan 14 14:34:16.441106 waagent[1798]: 2025-01-14T14:34:16.438336Z INFO Daemon Daemon Initializing goal state during protocol detection
Jan 14 14:34:16.441106 waagent[1798]: 2025-01-14T14:34:16.438741Z INFO Daemon Daemon Forcing an update of the goal state.
Jan 14 14:34:16.449118 waagent[1798]: 2025-01-14T14:34:16.447401Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 14:34:16.463860 waagent[1798]: 2025-01-14T14:34:16.463810Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Jan 14 14:34:16.466517 waagent[1798]: 2025-01-14T14:34:16.466123Z INFO Daemon
Jan 14 14:34:16.466935 waagent[1798]: 2025-01-14T14:34:16.466897Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 65078ad4-431d-4464-83d1-d09279cf2b21 eTag: 927017263460739116 source: Fabric]
Jan 14 14:34:16.469682 waagent[1798]: 2025-01-14T14:34:16.468020Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Jan 14 14:34:16.470568 waagent[1798]: 2025-01-14T14:34:16.470527Z INFO Daemon
Jan 14 14:34:16.471082 waagent[1798]: 2025-01-14T14:34:16.471046Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Jan 14 14:34:16.477402 waagent[1798]: 2025-01-14T14:34:16.477366Z INFO Daemon Daemon Downloading artifacts profile blob
Jan 14 14:34:16.553602 systemd[1824]: Queued start job for default target default.target.
Jan 14 14:34:16.561791 systemd[1824]: Created slice app.slice - User Application Slice.
Jan 14 14:34:16.561834 systemd[1824]: Reached target paths.target - Paths.
Jan 14 14:34:16.561853 systemd[1824]: Reached target timers.target - Timers.
Jan 14 14:34:16.564517 systemd[1824]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jan 14 14:34:16.579480 waagent[1798]: 2025-01-14T14:34:16.578721Z INFO Daemon Downloaded certificate {'thumbprint': 'DFA6D543C3C651562F31907BB0A5391D56F88145', 'hasPrivateKey': True}
Jan 14 14:34:16.583602 waagent[1798]: 2025-01-14T14:34:16.583538Z INFO Daemon Fetch goal state completed
Jan 14 14:34:16.596960 waagent[1798]: 2025-01-14T14:34:16.594036Z INFO Daemon Daemon Starting provisioning
Jan 14 14:34:16.594224 systemd[1824]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jan 14 14:34:16.594359 systemd[1824]: Reached target sockets.target - Sockets.
Jan 14 14:34:16.594375 systemd[1824]: Reached target basic.target - Basic System.
Jan 14 14:34:16.594410 systemd[1824]: Reached target default.target - Main User Target.
Jan 14 14:34:16.594440 systemd[1824]: Startup finished in 317ms.
Jan 14 14:34:16.595261 systemd[1]: Started user@500.service - User Manager for UID 500.
Jan 14 14:34:16.600835 systemd[1]: Started session-1.scope - Session 1 of User core.
Jan 14 14:34:16.602560 waagent[1798]: 2025-01-14T14:34:16.597701Z INFO Daemon Daemon Handle ovf-env.xml.
Jan 14 14:34:16.602560 waagent[1798]: 2025-01-14T14:34:16.599801Z INFO Daemon Daemon Set hostname [ci-4081.3.0-a-0bb245c6fa]
Jan 14 14:34:16.601613 systemd[1]: Started session-2.scope - Session 2 of User core.
Jan 14 14:34:16.611717 waagent[1798]: 2025-01-14T14:34:16.609974Z INFO Daemon Daemon Publish hostname [ci-4081.3.0-a-0bb245c6fa]
Jan 14 14:34:16.620094 waagent[1798]: 2025-01-14T14:34:16.614102Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Jan 14 14:34:16.620094 waagent[1798]: 2025-01-14T14:34:16.615170Z INFO Daemon Daemon Primary interface is [eth0]
Jan 14 14:34:16.636091 systemd-networkd[1575]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 14 14:34:16.636242 systemd-networkd[1575]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 14:34:16.636299 systemd-networkd[1575]: eth0: DHCP lease lost
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.637879Z INFO Daemon Daemon Create user account if not exists
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.638606Z INFO Daemon Daemon User core already exists, skip useradd
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.639273Z INFO Daemon Daemon Configure sudoer
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.640246Z INFO Daemon Daemon Configure sshd
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.641616Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Jan 14 14:34:16.646523 waagent[1798]: 2025-01-14T14:34:16.642435Z INFO Daemon Daemon Deploy ssh public key.
Jan 14 14:34:16.648997 systemd-networkd[1575]: eth0: DHCPv6 lease lost
Jan 14 14:34:16.675593 systemd-networkd[1575]: eth0: DHCPv4 address 10.200.8.34/24, gateway 10.200.8.1 acquired from 168.63.129.16
Jan 14 14:34:16.982769 kubelet[1812]: E0114 14:34:16.982634 1812 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 14 14:34:16.985322 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 14 14:34:16.985541 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 14 14:34:16.985923 systemd[1]: kubelet.service: Consumed 1.011s CPU time.
Jan 14 14:34:17.755812 waagent[1798]: 2025-01-14T14:34:17.755730Z INFO Daemon Daemon Provisioning complete
Jan 14 14:34:17.770151 waagent[1798]: 2025-01-14T14:34:17.770095Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Jan 14 14:34:17.776001 waagent[1798]: 2025-01-14T14:34:17.770390Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Jan 14 14:34:17.776001 waagent[1798]: 2025-01-14T14:34:17.771085Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Jan 14 14:34:17.899065 waagent[1872]: 2025-01-14T14:34:17.898962Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Jan 14 14:34:17.899530 waagent[1872]: 2025-01-14T14:34:17.899137Z INFO ExtHandler ExtHandler OS: flatcar 4081.3.0
Jan 14 14:34:17.899530 waagent[1872]: 2025-01-14T14:34:17.899221Z INFO ExtHandler ExtHandler Python: 3.11.9
Jan 14 14:34:17.920883 waagent[1872]: 2025-01-14T14:34:17.920781Z INFO ExtHandler ExtHandler Distro: flatcar-4081.3.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Jan 14 14:34:17.921127 waagent[1872]: 2025-01-14T14:34:17.921074Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Jan 14 14:34:17.921217 waagent[1872]: 2025-01-14T14:34:17.921179Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Jan 14 14:34:17.929015 waagent[1872]: 2025-01-14T14:34:17.928943Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Jan 14 14:34:17.934755 waagent[1872]: 2025-01-14T14:34:17.934699Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Jan 14 14:34:17.935240 waagent[1872]: 2025-01-14T14:34:17.935183Z INFO ExtHandler
Jan 14 14:34:17.935324 waagent[1872]: 2025-01-14T14:34:17.935275Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 897f72dd-bfb5-4b44-893a-f20747299488 eTag: 927017263460739116 source: Fabric]
Jan 14 14:34:17.935643 waagent[1872]: 2025-01-14T14:34:17.935593Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
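[Editor's note: the kubelet exits above because /var/lib/kubelet/config.yaml does not exist yet. On a node that joins a cluster via kubeadm this crash-restart cycle is expected until kubeadm init/join writes that file. A hypothetical minimal KubeletConfiguration of the kind that lands there; all values are illustrative:]

    # /var/lib/kubelet/config.yaml (hypothetical minimal example; normally written by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd    # consistent with the SystemdCgroup=true runc option above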
Jan 14 14:34:17.936186 waagent[1872]: 2025-01-14T14:34:17.936137Z INFO ExtHandler Jan 14 14:34:17.936254 waagent[1872]: 2025-01-14T14:34:17.936220Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jan 14 14:34:17.939559 waagent[1872]: 2025-01-14T14:34:17.939501Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jan 14 14:34:18.000257 waagent[1872]: 2025-01-14T14:34:18.000161Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DFA6D543C3C651562F31907BB0A5391D56F88145', 'hasPrivateKey': True} Jan 14 14:34:18.000821 waagent[1872]: 2025-01-14T14:34:18.000759Z INFO ExtHandler Fetch goal state completed Jan 14 14:34:18.016634 waagent[1872]: 2025-01-14T14:34:18.016514Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1872 Jan 14 14:34:18.016739 waagent[1872]: 2025-01-14T14:34:18.016692Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jan 14 14:34:18.018309 waagent[1872]: 2025-01-14T14:34:18.018249Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.3.0', '', 'Flatcar Container Linux by Kinvolk'] Jan 14 14:34:18.018684 waagent[1872]: 2025-01-14T14:34:18.018634Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jan 14 14:34:18.029876 waagent[1872]: 2025-01-14T14:34:18.029836Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jan 14 14:34:18.030062 waagent[1872]: 2025-01-14T14:34:18.030017Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Jan 14 14:34:18.036713 waagent[1872]: 2025-01-14T14:34:18.036642Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jan 14 14:34:18.043667 systemd[1]: Reloading requested from client PID 1885 ('systemctl') (unit waagent.service)... Jan 14 14:34:18.043683 systemd[1]: Reloading... Jan 14 14:34:18.136776 zram_generator::config[1919]: No configuration found. Jan 14 14:34:18.258678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:34:18.338298 systemd[1]: Reloading finished in 294 ms. Jan 14 14:34:18.366891 waagent[1872]: 2025-01-14T14:34:18.366779Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jan 14 14:34:18.374411 systemd[1]: Reloading requested from client PID 1976 ('systemctl') (unit waagent.service)... Jan 14 14:34:18.374429 systemd[1]: Reloading... Jan 14 14:34:18.467519 zram_generator::config[2006]: No configuration found. Jan 14 14:34:18.591387 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:34:18.673620 systemd[1]: Reloading finished in 298 ms. Jan 14 14:34:18.701613 waagent[1872]: 2025-01-14T14:34:18.697630Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jan 14 14:34:18.701613 waagent[1872]: 2025-01-14T14:34:18.697832Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jan 14 14:34:18.859655 waagent[1872]: 2025-01-14T14:34:18.859498Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. 
Environment thread will set it up. Jan 14 14:34:18.860205 waagent[1872]: 2025-01-14T14:34:18.860138Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jan 14 14:34:18.861054 waagent[1872]: 2025-01-14T14:34:18.860989Z INFO ExtHandler ExtHandler Starting env monitor service. Jan 14 14:34:18.861460 waagent[1872]: 2025-01-14T14:34:18.861406Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Jan 14 14:34:18.861612 waagent[1872]: 2025-01-14T14:34:18.861567Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 14:34:18.861723 waagent[1872]: 2025-01-14T14:34:18.861678Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 14:34:18.862101 waagent[1872]: 2025-01-14T14:34:18.862043Z INFO EnvHandler ExtHandler Configure routes Jan 14 14:34:18.862215 waagent[1872]: 2025-01-14T14:34:18.862168Z INFO EnvHandler ExtHandler Gateway:None Jan 14 14:34:18.862298 waagent[1872]: 2025-01-14T14:34:18.862260Z INFO EnvHandler ExtHandler Routes:None Jan 14 14:34:18.862608 waagent[1872]: 2025-01-14T14:34:18.862559Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jan 14 14:34:18.862826 waagent[1872]: 2025-01-14T14:34:18.862789Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jan 14 14:34:18.863205 waagent[1872]: 2025-01-14T14:34:18.863138Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jan 14 14:34:18.864150 waagent[1872]: 2025-01-14T14:34:18.863530Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jan 14 14:34:18.864150 waagent[1872]: 2025-01-14T14:34:18.863806Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jan 14 14:34:18.864150 waagent[1872]: 2025-01-14T14:34:18.864041Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jan 14 14:34:18.864150 waagent[1872]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jan 14 14:34:18.864150 waagent[1872]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jan 14 14:34:18.864150 waagent[1872]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jan 14 14:34:18.864150 waagent[1872]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:34:18.864150 waagent[1872]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:34:18.864150 waagent[1872]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jan 14 14:34:18.864855 waagent[1872]: 2025-01-14T14:34:18.864797Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jan 14 14:34:18.864992 waagent[1872]: 2025-01-14T14:34:18.864948Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
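The /proc/net/route dump above encodes addresses as little-endian 32-bit hex. A short standard-library decode confirms the default gateway matches the DHCPv4 lease acquired earlier:

    import socket, struct

    def hex_to_ip(h):
        # /proc/net/route stores IPv4 addresses as little-endian 32-bit hex
        return socket.inet_ntoa(struct.pack('<L', int(h, 16)))

    print(hex_to_ip('0108C80A'))  # -> 10.200.8.1, the gateway from the DHCPv4 lease
    print(hex_to_ip('0008C80A'))  # -> 10.200.8.0, the local /24 network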
Jan 14 14:34:18.865833 waagent[1872]: 2025-01-14T14:34:18.865781Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jan 14 14:34:18.874409 waagent[1872]: 2025-01-14T14:34:18.874352Z INFO ExtHandler ExtHandler Jan 14 14:34:18.875130 waagent[1872]: 2025-01-14T14:34:18.875092Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 57da948f-9e25-4c2c-bce7-c0ad43a32ea2 correlation fa2dc000-1012-4d83-a870-7ee501c918f4 created: 2025-01-14T14:33:42.146774Z] Jan 14 14:34:18.875746 waagent[1872]: 2025-01-14T14:34:18.875697Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Jan 14 14:34:18.876651 waagent[1872]: 2025-01-14T14:34:18.876603Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jan 14 14:34:18.890784 waagent[1872]: 2025-01-14T14:34:18.890718Z INFO MonitorHandler ExtHandler Network interfaces: Jan 14 14:34:18.890784 waagent[1872]: Executing ['ip', '-a', '-o', 'link']: Jan 14 14:34:18.890784 waagent[1872]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jan 14 14:34:18.890784 waagent[1872]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:29:5d brd ff:ff:ff:ff:ff:ff Jan 14 14:34:18.890784 waagent[1872]: 3: enP57844s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:29:5d brd ff:ff:ff:ff:ff:ff\ altname enP57844p0s2 Jan 14 14:34:18.890784 waagent[1872]: Executing ['ip', '-4', '-a', '-o', 'address']: Jan 14 14:34:18.890784 waagent[1872]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jan 14 14:34:18.890784 waagent[1872]: 2: eth0 inet 10.200.8.34/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Jan 14 14:34:18.890784 waagent[1872]: Executing ['ip', '-6', '-a', '-o', 'address']: Jan 14 14:34:18.890784 waagent[1872]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Jan 14 14:34:18.890784 waagent[1872]: 2: eth0 inet6 fe80::7e1e:52ff:fe20:295d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 14:34:18.890784 waagent[1872]: 3: enP57844s1 inet6 fe80::7e1e:52ff:fe20:295d/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jan 14 14:34:18.915077 waagent[1872]: 2025-01-14T14:34:18.915012Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: ADB3E108-F421-4E79-9300-40CC3CE3D790;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jan 14 14:34:18.929246 waagent[1872]: 2025-01-14T14:34:18.929183Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Jan 14 14:34:18.929246 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.929246 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.929246 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.929246 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.929246 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.929246 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.929246 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 14:34:18.929246 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 14:34:18.929246 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 14:34:18.932773 waagent[1872]: 2025-01-14T14:34:18.932710Z INFO EnvHandler ExtHandler Current Firewall rules: Jan 14 14:34:18.932773 waagent[1872]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.932773 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.932773 waagent[1872]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.932773 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.932773 waagent[1872]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jan 14 14:34:18.932773 waagent[1872]: pkts bytes target prot opt in out source destination Jan 14 14:34:18.932773 waagent[1872]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jan 14 14:34:18.932773 waagent[1872]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jan 14 14:34:18.932773 waagent[1872]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jan 14 14:34:18.933174 waagent[1872]: 2025-01-14T14:34:18.933041Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jan 14 14:34:27.043689 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 14:34:27.050727 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:27.145907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:34:27.154799 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:27.739402 kubelet[2106]: E0114 14:34:27.739332 2106 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:27.743793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:27.744019 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:37.793814 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 14:34:37.806708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:37.898501 chronyd[1669]: Selected source PHC0 Jan 14 14:34:37.904662 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
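The OUTPUT-chain dump above is waagent's standard Azure fabric protection: permit DNS and root-owned TCP traffic to the WireServer (168.63.129.16) and drop new connections to it from everything else. Roughly equivalent iptables invocations, shown for illustration only (waagent issues its own commands; this is a sketch of the same three rules):

    iptables -A OUTPUT -d 168.63.129.16 -p tcp --dport 53 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m owner --uid-owner 0 -j ACCEPT
    iptables -A OUTPUT -d 168.63.129.16 -p tcp -m conntrack --ctstate INVALID,NEW -j DROP

Note that the second dump already shows the owner-match rule counting traffic (4 packets, 594 bytes): the agent itself, running as root, is the client it just whitelisted.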
Jan 14 14:34:37.906394 (kubelet)[2122]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:37.943483 kubelet[2122]: E0114 14:34:37.943418 2122 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:37.945978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:37.946198 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:48.043840 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 14:34:48.055720 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:48.147900 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:34:48.152635 (kubelet)[2138]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:48.746328 kubelet[2138]: E0114 14:34:48.746273 2138 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:48.749151 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:48.749371 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:49.082845 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 14:34:49.084224 systemd[1]: Started sshd@0-10.200.8.34:22-10.200.16.10:57342.service - OpenSSH per-connection server daemon (10.200.16.10:57342). Jan 14 14:34:49.742119 sshd[2147]: Accepted publickey for core from 10.200.16.10 port 57342 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:49.743976 sshd[2147]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:49.749542 systemd-logind[1672]: New session 3 of user core. Jan 14 14:34:49.758661 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 14:34:50.304661 systemd[1]: Started sshd@1-10.200.8.34:22-10.200.16.10:57346.service - OpenSSH per-connection server daemon (10.200.16.10:57346). Jan 14 14:34:50.941993 sshd[2152]: Accepted publickey for core from 10.200.16.10 port 57346 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:50.943856 sshd[2152]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:50.948514 systemd-logind[1672]: New session 4 of user core. Jan 14 14:34:50.956656 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 14:34:51.399174 sshd[2152]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:51.402882 systemd[1]: sshd@1-10.200.8.34:22-10.200.16.10:57346.service: Deactivated successfully. Jan 14 14:34:51.405241 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 14:34:51.406659 systemd-logind[1672]: Session 4 logged out. Waiting for processes to exit. Jan 14 14:34:51.407747 systemd-logind[1672]: Removed session 4. 
Jan 14 14:34:51.511443 systemd[1]: Started sshd@2-10.200.8.34:22-10.200.16.10:57362.service - OpenSSH per-connection server daemon (10.200.16.10:57362). Jan 14 14:34:52.150096 sshd[2159]: Accepted publickey for core from 10.200.16.10 port 57362 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:52.151862 sshd[2159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:52.155911 systemd-logind[1672]: New session 5 of user core. Jan 14 14:34:52.164650 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 14:34:52.600136 sshd[2159]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:52.603357 systemd[1]: sshd@2-10.200.8.34:22-10.200.16.10:57362.service: Deactivated successfully. Jan 14 14:34:52.605485 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 14:34:52.607084 systemd-logind[1672]: Session 5 logged out. Waiting for processes to exit. Jan 14 14:34:52.608221 systemd-logind[1672]: Removed session 5. Jan 14 14:34:52.716456 systemd[1]: Started sshd@3-10.200.8.34:22-10.200.16.10:57372.service - OpenSSH per-connection server daemon (10.200.16.10:57372). Jan 14 14:34:53.366451 sshd[2166]: Accepted publickey for core from 10.200.16.10 port 57372 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:53.368325 sshd[2166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:53.373972 systemd-logind[1672]: New session 6 of user core. Jan 14 14:34:53.383649 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 14:34:53.828909 sshd[2166]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:53.833495 systemd[1]: sshd@3-10.200.8.34:22-10.200.16.10:57372.service: Deactivated successfully. Jan 14 14:34:53.835750 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 14:34:53.836518 systemd-logind[1672]: Session 6 logged out. Waiting for processes to exit. Jan 14 14:34:53.837450 systemd-logind[1672]: Removed session 6. Jan 14 14:34:53.941577 systemd[1]: Started sshd@4-10.200.8.34:22-10.200.16.10:57374.service - OpenSSH per-connection server daemon (10.200.16.10:57374). Jan 14 14:34:54.580225 sshd[2173]: Accepted publickey for core from 10.200.16.10 port 57374 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:54.581899 sshd[2173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:54.585925 systemd-logind[1672]: New session 7 of user core. Jan 14 14:34:54.595616 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 14:34:54.974155 sudo[2176]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 14:34:54.974553 sudo[2176]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:54.994338 sudo[2176]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:55.101645 sshd[2173]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:55.106807 systemd[1]: sshd@4-10.200.8.34:22-10.200.16.10:57374.service: Deactivated successfully. Jan 14 14:34:55.109119 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 14:34:55.110043 systemd-logind[1672]: Session 7 logged out. Waiting for processes to exit. Jan 14 14:34:55.111292 systemd-logind[1672]: Removed session 7. Jan 14 14:34:55.219043 systemd[1]: Started sshd@5-10.200.8.34:22-10.200.16.10:57388.service - OpenSSH per-connection server daemon (10.200.16.10:57388). 
Jan 14 14:34:55.853846 sshd[2181]: Accepted publickey for core from 10.200.16.10 port 57388 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:55.855769 sshd[2181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:55.860864 systemd-logind[1672]: New session 8 of user core. Jan 14 14:34:55.870635 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 14:34:56.205396 sudo[2185]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 14:34:56.206191 sudo[2185]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:56.209543 sudo[2185]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:56.214943 sudo[2184]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 14 14:34:56.215299 sudo[2184]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:56.233901 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 14 14:34:56.235762 auditctl[2188]: No rules Jan 14 14:34:56.236156 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 14:34:56.236369 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 14 14:34:56.243007 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 14 14:34:56.266414 augenrules[2206]: No rules Jan 14 14:34:56.268034 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 14 14:34:56.269843 sudo[2184]: pam_unix(sudo:session): session closed for user root Jan 14 14:34:56.373084 sshd[2181]: pam_unix(sshd:session): session closed for user core Jan 14 14:34:56.376902 systemd[1]: sshd@5-10.200.8.34:22-10.200.16.10:57388.service: Deactivated successfully. Jan 14 14:34:56.379317 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 14:34:56.380961 systemd-logind[1672]: Session 8 logged out. Waiting for processes to exit. Jan 14 14:34:56.381859 systemd-logind[1672]: Removed session 8. Jan 14 14:34:56.490715 systemd[1]: Started sshd@6-10.200.8.34:22-10.200.16.10:50090.service - OpenSSH per-connection server daemon (10.200.16.10:50090). Jan 14 14:34:57.130288 sshd[2214]: Accepted publickey for core from 10.200.16.10 port 50090 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:34:57.132139 sshd[2214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:34:57.137519 systemd-logind[1672]: New session 9 of user core. Jan 14 14:34:57.142636 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 14 14:34:57.481711 sudo[2217]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 14:34:57.482176 sudo[2217]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 14:34:58.793395 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 14 14:34:58.799736 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:34:59.572451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 14:34:59.583786 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:34:59.631772 kubelet[2239]: E0114 14:34:59.631717 2239 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:34:59.634223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:34:59.634445 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:34:59.698615 update_engine[1674]: I20250114 14:34:59.697585 1674 update_attempter.cc:509] Updating boot flags... Jan 14 14:34:59.749505 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2259) Jan 14 14:34:59.872806 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2260) Jan 14 14:35:00.085798 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 14 14:35:00.087314 (dockerd)[2314]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 14:35:01.548049 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jan 14 14:35:03.051177 dockerd[2314]: time="2025-01-14T14:35:03.051110048Z" level=info msg="Starting up" Jan 14 14:35:04.114135 dockerd[2314]: time="2025-01-14T14:35:04.114085035Z" level=info msg="Loading containers: start." Jan 14 14:35:04.386681 kernel: Initializing XFRM netlink socket Jan 14 14:35:04.585765 systemd-networkd[1575]: docker0: Link UP Jan 14 14:35:04.612683 dockerd[2314]: time="2025-01-14T14:35:04.612647583Z" level=info msg="Loading containers: done." Jan 14 14:35:04.749265 dockerd[2314]: time="2025-01-14T14:35:04.749206794Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 14:35:04.749591 dockerd[2314]: time="2025-01-14T14:35:04.749413900Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 14 14:35:04.749709 dockerd[2314]: time="2025-01-14T14:35:04.749605005Z" level=info msg="Daemon has completed initialization" Jan 14 14:35:04.819825 dockerd[2314]: time="2025-01-14T14:35:04.819379201Z" level=info msg="API listen on /run/docker.sock" Jan 14 14:35:04.819774 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 14 14:35:06.134958 containerd[1697]: time="2025-01-14T14:35:06.134904252Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\"" Jan 14 14:35:07.024173 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827792182.mount: Deactivated successfully. 
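The docker.socket warning emitted during both daemon-reloads (ListenStream= under the legacy /var/run/ directory) is corrected at runtime by systemd itself, which rewrites the path to /run/docker.sock, as the "API listen on /run/docker.sock" line above confirms. A drop-in that would fix the shipped unit permanently might look like this (the drop-in path is an assumption):

    # /etc/systemd/system/docker.socket.d/10-runtime-dir.conf  (hypothetical)
    [Socket]
    ListenStream=                   # clear the inherited legacy value
    ListenStream=/run/docker.sock   # re-declare under /run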
Jan 14 14:35:08.820880 containerd[1697]: time="2025-01-14T14:35:08.820810933Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:08.824278 containerd[1697]: time="2025-01-14T14:35:08.824194137Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.8: active requests=0, bytes read=32675650" Jan 14 14:35:08.832485 containerd[1697]: time="2025-01-14T14:35:08.831744269Z" level=info msg="ImageCreate event name:\"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:08.838498 containerd[1697]: time="2025-01-14T14:35:08.838445374Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:08.840103 containerd[1697]: time="2025-01-14T14:35:08.839596410Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.8\" with image id \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f0e1b3de0c2e98e6c6abd73edf9d3b8e4d44460656cde0ebb92e2d9206961fcb\", size \"32672442\" in 2.704638756s" Jan 14 14:35:08.840103 containerd[1697]: time="2025-01-14T14:35:08.839642911Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.8\" returns image reference \"sha256:772392d372035bf92e430e758ad0446146d82b7192358c8651252e4fb49c43dd\"" Jan 14 14:35:08.862247 containerd[1697]: time="2025-01-14T14:35:08.862207404Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\"" Jan 14 14:35:09.793633 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 14 14:35:09.801128 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:10.323728 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:10.336971 (kubelet)[2520]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:35:10.378321 kubelet[2520]: E0114 14:35:10.378198 2520 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:35:10.380775 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:35:10.380999 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
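Pulls like the kube-apiserver one above go through containerd's CRI plugin and can be reproduced by hand. One way, assuming crictl is installed and the default containerd socket path (both assumptions, neither shown in this log):

    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.30.8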
Jan 14 14:35:11.345632 containerd[1697]: time="2025-01-14T14:35:11.345566150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:11.350250 containerd[1697]: time="2025-01-14T14:35:11.350176514Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.8: active requests=0, bytes read=29606417" Jan 14 14:35:11.354367 containerd[1697]: time="2025-01-14T14:35:11.354336561Z" level=info msg="ImageCreate event name:\"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:11.361496 containerd[1697]: time="2025-01-14T14:35:11.361413413Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:11.362686 containerd[1697]: time="2025-01-14T14:35:11.362498651Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.8\" with image id \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:124f66b7e877eb5a80a40503057299bb60e6a5f2130905f4e3293dabf194c397\", size \"31051521\" in 2.500214745s" Jan 14 14:35:11.362686 containerd[1697]: time="2025-01-14T14:35:11.362542953Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.8\" returns image reference \"sha256:85333d41dd3ce32d8344280c6d533d4c8f66252e4c28e332a2322ba3837f7bd6\"" Jan 14 14:35:11.383891 containerd[1697]: time="2025-01-14T14:35:11.383853309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\"" Jan 14 14:35:12.772423 containerd[1697]: time="2025-01-14T14:35:12.772365501Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:12.775480 containerd[1697]: time="2025-01-14T14:35:12.775409209Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.8: active requests=0, bytes read=17783043" Jan 14 14:35:12.780370 containerd[1697]: time="2025-01-14T14:35:12.780310483Z" level=info msg="ImageCreate event name:\"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:12.787872 containerd[1697]: time="2025-01-14T14:35:12.787813250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:12.789019 containerd[1697]: time="2025-01-14T14:35:12.788805985Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.8\" with image id \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:c8bdeac2590c99c1a77e33995423ddb6633ff90a82a2aa455442e0a8079ef8c7\", size \"19228165\" in 1.404908374s" Jan 14 14:35:12.789019 containerd[1697]: time="2025-01-14T14:35:12.788903188Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.8\" returns image reference \"sha256:eb53b988d5e03f329b5fdba21cbbbae48e1619b199689e7448095b31843b2c43\"" Jan 14 14:35:12.812402 
containerd[1697]: time="2025-01-14T14:35:12.812355521Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\"" Jan 14 14:35:13.930211 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631099921.mount: Deactivated successfully. Jan 14 14:35:14.422956 containerd[1697]: time="2025-01-14T14:35:14.422894895Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:14.426943 containerd[1697]: time="2025-01-14T14:35:14.426862736Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478" Jan 14 14:35:14.430810 containerd[1697]: time="2025-01-14T14:35:14.430752274Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:14.434432 containerd[1697]: time="2025-01-14T14:35:14.434376802Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:14.435229 containerd[1697]: time="2025-01-14T14:35:14.434977624Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 1.622579801s" Jan 14 14:35:14.435229 containerd[1697]: time="2025-01-14T14:35:14.435019025Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\"" Jan 14 14:35:14.457630 containerd[1697]: time="2025-01-14T14:35:14.457586026Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 14 14:35:14.998709 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount858426708.mount: Deactivated successfully. 
Jan 14 14:35:16.485227 containerd[1697]: time="2025-01-14T14:35:16.485158215Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:16.492785 containerd[1697]: time="2025-01-14T14:35:16.492716852Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jan 14 14:35:16.497220 containerd[1697]: time="2025-01-14T14:35:16.497159391Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:16.502584 containerd[1697]: time="2025-01-14T14:35:16.502542059Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:16.503845 containerd[1697]: time="2025-01-14T14:35:16.503666595Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.046029066s" Jan 14 14:35:16.503845 containerd[1697]: time="2025-01-14T14:35:16.503711896Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 14 14:35:16.526269 containerd[1697]: time="2025-01-14T14:35:16.526230201Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 14 14:35:17.029348 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819290798.mount: Deactivated successfully. 
Jan 14 14:35:17.059702 containerd[1697]: time="2025-01-14T14:35:17.059649101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:17.062370 containerd[1697]: time="2025-01-14T14:35:17.062309985Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jan 14 14:35:17.069889 containerd[1697]: time="2025-01-14T14:35:17.069837720Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:17.076553 containerd[1697]: time="2025-01-14T14:35:17.076501929Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:17.077384 containerd[1697]: time="2025-01-14T14:35:17.077223352Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 550.950749ms" Jan 14 14:35:17.077384 containerd[1697]: time="2025-01-14T14:35:17.077262953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 14 14:35:17.099632 containerd[1697]: time="2025-01-14T14:35:17.099447247Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 14 14:35:17.688712 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2024962748.mount: Deactivated successfully. Jan 14 14:35:19.998162 containerd[1697]: time="2025-01-14T14:35:19.998099913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:20.001233 containerd[1697]: time="2025-01-14T14:35:20.001157994Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Jan 14 14:35:20.009656 containerd[1697]: time="2025-01-14T14:35:20.009598120Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:20.016422 containerd[1697]: time="2025-01-14T14:35:20.016364201Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:35:20.017971 containerd[1697]: time="2025-01-14T14:35:20.017515132Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.917999683s" Jan 14 14:35:20.017971 containerd[1697]: time="2025-01-14T14:35:20.017556333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 14 14:35:20.543991 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
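The pause:3.9 image pulled above is the sandbox (infra) image that backs every pod. On containerd hosts it is usually pinned in the CRI plugin configuration; a sketch using the conventional containerd 1.7 layout (the file path and key are assumed, not read from this host):

    # /etc/containerd/config.toml (excerpt, assumed layout)
    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.9"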
Jan 14 14:35:20.553708 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:20.644959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:20.649225 (kubelet)[2676]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 14:35:20.686871 kubelet[2676]: E0114 14:35:20.686813 2676 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 14:35:20.689344 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 14:35:20.689585 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 14:35:23.607921 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:23.614796 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:23.644281 systemd[1]: Reloading requested from client PID 2740 ('systemctl') (unit session-9.scope)... Jan 14 14:35:23.644310 systemd[1]: Reloading... Jan 14 14:35:23.765499 zram_generator::config[2776]: No configuration found. Jan 14 14:35:23.902818 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 14 14:35:23.982644 systemd[1]: Reloading finished in 337 ms. Jan 14 14:35:24.047627 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:24.050620 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 14:35:24.050872 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:24.055850 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 14:35:24.298928 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 14:35:24.314859 (kubelet)[2852]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 14:35:24.356501 kubelet[2852]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 14:35:24.356501 kubelet[2852]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 14 14:35:24.356501 kubelet[2852]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
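The three deprecation warnings above all point the same way: move flags into the kubelet config file. Two of them have direct KubeletConfiguration equivalents; a sketch of the migrated fields (the endpoint value is an assumption, while the volume plugin dir matches the Flexvolume path the kubelet logs just below):

    # additions to /var/lib/kubelet/config.yaml (sketch)
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock        # replaces --container-runtime-endpoint (assumed value)
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/   # replaces --volume-plugin-dir

--pod-infra-container-image has no config-file equivalent; per the warning itself, the image garbage collector obtains the sandbox image from the CRI runtime instead.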
Jan 14 14:35:24.356501 kubelet[2852]: I0114 14:35:24.356150 2852 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 14:35:25.301349 kubelet[2852]: I0114 14:35:25.301305 2852 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 14 14:35:25.301349 kubelet[2852]: I0114 14:35:25.301338 2852 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 14:35:25.301638 kubelet[2852]: I0114 14:35:25.301618 2852 server.go:927] "Client rotation is on, will bootstrap in background" Jan 14 14:35:25.324822 kubelet[2852]: E0114 14:35:25.324794 2852 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.325722 kubelet[2852]: I0114 14:35:25.325584 2852 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 14:35:25.337478 kubelet[2852]: I0114 14:35:25.337450 2852 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 14 14:35:25.337780 kubelet[2852]: I0114 14:35:25.337742 2852 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 14:35:25.337966 kubelet[2852]: I0114 14:35:25.337778 2852 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-0bb245c6fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 14 14:35:25.338121 kubelet[2852]: I0114 14:35:25.337980 2852 topology_manager.go:138] "Creating topology manager with none policy" Jan 14 14:35:25.338121 kubelet[2852]: I0114 14:35:25.337993 2852 container_manager_linux.go:301] "Creating device plugin manager" Jan 14 14:35:25.338203 kubelet[2852]: I0114 14:35:25.338147 2852 state_mem.go:36] "Initialized new in-memory 
state store" Jan 14 14:35:25.339121 kubelet[2852]: I0114 14:35:25.339100 2852 kubelet.go:400] "Attempting to sync node with API server" Jan 14 14:35:25.339121 kubelet[2852]: I0114 14:35:25.339123 2852 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 14:35:25.339376 kubelet[2852]: I0114 14:35:25.339152 2852 kubelet.go:312] "Adding apiserver pod source" Jan 14 14:35:25.339376 kubelet[2852]: I0114 14:35:25.339174 2852 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 14:35:25.344400 kubelet[2852]: W0114 14:35:25.344354 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.344916 kubelet[2852]: E0114 14:35:25.344508 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.344916 kubelet[2852]: W0114 14:35:25.344824 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.344916 kubelet[2852]: E0114 14:35:25.344871 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.345153 kubelet[2852]: I0114 14:35:25.345128 2852 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 14 14:35:25.346949 kubelet[2852]: I0114 14:35:25.346924 2852 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 14:35:25.347068 kubelet[2852]: W0114 14:35:25.347035 2852 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 14 14:35:25.348892 kubelet[2852]: I0114 14:35:25.348246 2852 server.go:1264] "Started kubelet" Jan 14 14:35:25.355526 kubelet[2852]: I0114 14:35:25.355509 2852 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 14:35:25.358495 kubelet[2852]: E0114 14:35:25.357168 2852 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.34:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.34:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.3.0-a-0bb245c6fa.181a95cfa412f25b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.3.0-a-0bb245c6fa,UID:ci-4081.3.0-a-0bb245c6fa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.3.0-a-0bb245c6fa,},FirstTimestamp:2025-01-14 14:35:25.348221531 +0000 UTC m=+1.029760347,LastTimestamp:2025-01-14 14:35:25.348221531 +0000 UTC m=+1.029760347,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.3.0-a-0bb245c6fa,}" Jan 14 14:35:25.359835 kubelet[2852]: I0114 14:35:25.359787 2852 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 14:35:25.361398 kubelet[2852]: I0114 14:35:25.361381 2852 server.go:455] "Adding debug handlers to kubelet server" Jan 14 14:35:25.364600 kubelet[2852]: I0114 14:35:25.364548 2852 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 14:35:25.364865 kubelet[2852]: I0114 14:35:25.364808 2852 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 14 14:35:25.365001 kubelet[2852]: I0114 14:35:25.364987 2852 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 14:35:25.368178 kubelet[2852]: I0114 14:35:25.368162 2852 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 14 14:35:25.368327 kubelet[2852]: I0114 14:35:25.368315 2852 reconciler.go:26] "Reconciler: start to sync state" Jan 14 14:35:25.369519 kubelet[2852]: W0114 14:35:25.369438 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.369595 kubelet[2852]: E0114 14:35:25.369534 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.370076 kubelet[2852]: I0114 14:35:25.370047 2852 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 14:35:25.370529 kubelet[2852]: E0114 14:35:25.370490 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="200ms" Jan 14 14:35:25.372214 kubelet[2852]: E0114 14:35:25.371960 2852 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 14:35:25.372571 kubelet[2852]: I0114 14:35:25.372549 2852 factory.go:221] Registration of the containerd container factory successfully Jan 14 14:35:25.372571 kubelet[2852]: I0114 14:35:25.372570 2852 factory.go:221] Registration of the systemd container factory successfully Jan 14 14:35:25.383622 kubelet[2852]: I0114 14:35:25.383595 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 14:35:25.385212 kubelet[2852]: I0114 14:35:25.384923 2852 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 14 14:35:25.385212 kubelet[2852]: I0114 14:35:25.384949 2852 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 14 14:35:25.385212 kubelet[2852]: I0114 14:35:25.384969 2852 kubelet.go:2337] "Starting kubelet main sync loop" Jan 14 14:35:25.385212 kubelet[2852]: E0114 14:35:25.385010 2852 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 14:35:25.390717 kubelet[2852]: W0114 14:35:25.390531 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.390717 kubelet[2852]: E0114 14:35:25.390587 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused Jan 14 14:35:25.416415 kubelet[2852]: I0114 14:35:25.416397 2852 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 14 14:35:25.416415 kubelet[2852]: I0114 14:35:25.416414 2852 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 14 14:35:25.416556 kubelet[2852]: I0114 14:35:25.416432 2852 state_mem.go:36] "Initialized new in-memory state store" Jan 14 14:35:25.423137 kubelet[2852]: I0114 14:35:25.423074 2852 policy_none.go:49] "None policy: Start" Jan 14 14:35:25.423854 kubelet[2852]: I0114 14:35:25.423826 2852 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 14 14:35:25.423854 kubelet[2852]: I0114 14:35:25.423852 2852 state_mem.go:35] "Initializing new in-memory state store" Jan 14 14:35:25.432344 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 14 14:35:25.445652 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 14 14:35:25.449210 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jan 14 14:35:25.458182 kubelet[2852]: I0114 14:35:25.458156 2852 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 14:35:25.458957 kubelet[2852]: I0114 14:35:25.458393 2852 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 14:35:25.458957 kubelet[2852]: I0114 14:35:25.458707 2852 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 14:35:25.461077 kubelet[2852]: E0114 14:35:25.461053 2852 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.3.0-a-0bb245c6fa\" not found" Jan 14 14:35:25.467232 kubelet[2852]: I0114 14:35:25.467201 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:35:25.467586 kubelet[2852]: E0114 14:35:25.467560 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:35:25.486224 kubelet[2852]: I0114 14:35:25.486081 2852 topology_manager.go:215] "Topology Admit Handler" podUID="186592f8b856d6425267503d411330a8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-0bb245c6fa" Jan 14 14:35:25.487978 kubelet[2852]: I0114 14:35:25.487950 2852 topology_manager.go:215] "Topology Admit Handler" podUID="0b359362fd9859b1748a22e074816a11" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-0bb245c6fa" Jan 14 14:35:25.490316 kubelet[2852]: I0114 14:35:25.489979 2852 topology_manager.go:215] "Topology Admit Handler" podUID="f276a7c464bae4ba9f2d935671260d21" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-0bb245c6fa" Jan 14 14:35:25.497580 systemd[1]: Created slice kubepods-burstable-pod0b359362fd9859b1748a22e074816a11.slice - libcontainer container kubepods-burstable-pod0b359362fd9859b1748a22e074816a11.slice. Jan 14 14:35:25.511021 systemd[1]: Created slice kubepods-burstable-pod186592f8b856d6425267503d411330a8.slice - libcontainer container kubepods-burstable-pod186592f8b856d6425267503d411330a8.slice. Jan 14 14:35:25.524956 systemd[1]: Created slice kubepods-burstable-podf276a7c464bae4ba9f2d935671260d21.slice - libcontainer container kubepods-burstable-podf276a7c464bae4ba9f2d935671260d21.slice. 
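The three burstable pod slices created above correspond one-to-one with the control-plane static pods admitted a moment earlier; each slice name embeds the pod UID from the Topology Admit Handler lines. On a kubeadm-style host the manifests behind them would sit in the staticPodPath; a hedged illustration of the expected layout (not read from this host):

    ls /etc/kubernetes/manifests
    # kube-apiserver.yaml  kube-controller-manager.yaml  kube-scheduler.yaml
    # (etcd.yaml would normally be present too on a kubeadm control plane; the
    #  etcd image was pulled above, but no etcd pod is admitted in this window)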
Jan 14 14:35:25.571776 kubelet[2852]: E0114 14:35:25.571645 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="400ms"
Jan 14 14:35:25.669637 kubelet[2852]: I0114 14:35:25.669202 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669637 kubelet[2852]: I0114 14:35:25.669253 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669637 kubelet[2852]: I0114 14:35:25.669298 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669637 kubelet[2852]: I0114 14:35:25.669330 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669637 kubelet[2852]: I0114 14:35:25.669362 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669993 kubelet[2852]: I0114 14:35:25.669392 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669993 kubelet[2852]: I0114 14:35:25.669424 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669993 kubelet[2852]: I0114 14:35:25.669454 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.669993 kubelet[2852]: I0114 14:35:25.669499 2852 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f276a7c464bae4ba9f2d935671260d21-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-0bb245c6fa\" (UID: \"f276a7c464bae4ba9f2d935671260d21\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.670428 kubelet[2852]: I0114 14:35:25.670364 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.670826 kubelet[2852]: E0114 14:35:25.670786 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:25.810411 containerd[1697]: time="2025-01-14T14:35:25.810360477Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-0bb245c6fa,Uid:0b359362fd9859b1748a22e074816a11,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:25.824005 containerd[1697]: time="2025-01-14T14:35:25.823900995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-0bb245c6fa,Uid:186592f8b856d6425267503d411330a8,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:25.827878 containerd[1697]: time="2025-01-14T14:35:25.827826616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-0bb245c6fa,Uid:f276a7c464bae4ba9f2d935671260d21,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:25.972200 kubelet[2852]: E0114 14:35:25.972139 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="800ms"
Jan 14 14:35:26.073626 kubelet[2852]: I0114 14:35:26.073553 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:26.074280 kubelet[2852]: E0114 14:35:26.073937 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:26.208989 kubelet[2852]: W0114 14:35:26.208921 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.208989 kubelet[2852]: E0114 14:35:26.208991 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.375422 kubelet[2852]: W0114 14:35:26.374038 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.375422 kubelet[2852]: E0114 14:35:26.374084 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.397704 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2157117011.mount: Deactivated successfully.
Jan 14 14:35:26.407255 kubelet[2852]: W0114 14:35:26.407216 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.407378 kubelet[2852]: E0114 14:35:26.407262 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.471590 containerd[1697]: time="2025-01-14T14:35:26.471532519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:26.473673 containerd[1697]: time="2025-01-14T14:35:26.473612991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Jan 14 14:35:26.477229 containerd[1697]: time="2025-01-14T14:35:26.477186414Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:26.480775 containerd[1697]: time="2025-01-14T14:35:26.480743037Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:26.484442 containerd[1697]: time="2025-01-14T14:35:26.484391663Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 14:35:26.487321 containerd[1697]: time="2025-01-14T14:35:26.487283263Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:26.490668 containerd[1697]: time="2025-01-14T14:35:26.490373270Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Jan 14 14:35:26.497819 containerd[1697]: time="2025-01-14T14:35:26.497790726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Jan 14 14:35:26.498562 containerd[1697]: time="2025-01-14T14:35:26.498528051Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 674.552654ms"
Jan 14 14:35:26.500089 containerd[1697]: time="2025-01-14T14:35:26.500054804Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.591524ms"
Jan 14 14:35:26.505612 containerd[1697]: time="2025-01-14T14:35:26.505574194Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 677.667476ms"
Jan 14 14:35:26.773748 kubelet[2852]: E0114 14:35:26.773591 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="1.6s"
Jan 14 14:35:26.854067 kubelet[2852]: W0114 14:35:26.853987 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.854067 kubelet[2852]: E0114 14:35:26.854071 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:26.877327 kubelet[2852]: I0114 14:35:26.876800 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:26.877327 kubelet[2852]: E0114 14:35:26.877157 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:26.919513 containerd[1697]: time="2025-01-14T14:35:26.918700360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:26.919513 containerd[1697]: time="2025-01-14T14:35:26.918772262Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:26.919513 containerd[1697]: time="2025-01-14T14:35:26.918808063Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.919513 containerd[1697]: time="2025-01-14T14:35:26.918937068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.925499 containerd[1697]: time="2025-01-14T14:35:26.924094346Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:26.925499 containerd[1697]: time="2025-01-14T14:35:26.924164848Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:26.925499 containerd[1697]: time="2025-01-14T14:35:26.924209050Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.925499 containerd[1697]: time="2025-01-14T14:35:26.924367355Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.930631 containerd[1697]: time="2025-01-14T14:35:26.930378563Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:26.930631 containerd[1697]: time="2025-01-14T14:35:26.930442765Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:26.930631 containerd[1697]: time="2025-01-14T14:35:26.930475866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.930631 containerd[1697]: time="2025-01-14T14:35:26.930559769Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:26.956727 systemd[1]: Started cri-containerd-79a91cc71681bc12b075bd6ef356899cef9fd9db39e118b95a5d4d28a7064408.scope - libcontainer container 79a91cc71681bc12b075bd6ef356899cef9fd9db39e118b95a5d4d28a7064408.
Jan 14 14:35:26.962419 systemd[1]: Started cri-containerd-6b7f7c153bd98e3e4035500f0de036a9df413f800bd714071602da8674b03227.scope - libcontainer container 6b7f7c153bd98e3e4035500f0de036a9df413f800bd714071602da8674b03227.
Jan 14 14:35:26.964947 systemd[1]: Started cri-containerd-81aa62bc0c8e5eec922491d116ad74742360c38af1b9389188765d1f4027bbec.scope - libcontainer container 81aa62bc0c8e5eec922491d116ad74742360c38af1b9389188765d1f4027bbec.
Jan 14 14:35:27.038588 containerd[1697]: time="2025-01-14T14:35:27.038327290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.3.0-a-0bb245c6fa,Uid:0b359362fd9859b1748a22e074816a11,Namespace:kube-system,Attempt:0,} returns sandbox id \"81aa62bc0c8e5eec922491d116ad74742360c38af1b9389188765d1f4027bbec\""
Jan 14 14:35:27.047001 containerd[1697]: time="2025-01-14T14:35:27.046582675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.3.0-a-0bb245c6fa,Uid:186592f8b856d6425267503d411330a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"79a91cc71681bc12b075bd6ef356899cef9fd9db39e118b95a5d4d28a7064408\""
Jan 14 14:35:27.056504 containerd[1697]: time="2025-01-14T14:35:27.055125270Z" level=info msg="CreateContainer within sandbox \"81aa62bc0c8e5eec922491d116ad74742360c38af1b9389188765d1f4027bbec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Jan 14 14:35:27.063292 containerd[1697]: time="2025-01-14T14:35:27.063254151Z" level=info msg="CreateContainer within sandbox \"79a91cc71681bc12b075bd6ef356899cef9fd9db39e118b95a5d4d28a7064408\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Jan 14 14:35:27.070003 containerd[1697]: time="2025-01-14T14:35:27.069965183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.3.0-a-0bb245c6fa,Uid:f276a7c464bae4ba9f2d935671260d21,Namespace:kube-system,Attempt:0,} returns sandbox id \"6b7f7c153bd98e3e4035500f0de036a9df413f800bd714071602da8674b03227\""
Jan 14 14:35:27.073201 containerd[1697]: time="2025-01-14T14:35:27.073161993Z" level=info msg="CreateContainer within sandbox \"6b7f7c153bd98e3e4035500f0de036a9df413f800bd714071602da8674b03227\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Jan 14 14:35:27.433322 kubelet[2852]: E0114 14:35:27.433277 2852 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.34:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.213461 kubelet[2852]: W0114 14:35:28.213361 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.213461 kubelet[2852]: W0114 14:35:28.213361 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.213727 kubelet[2852]: E0114 14:35:28.213515 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.34:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.213727 kubelet[2852]: E0114 14:35:28.213555 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.34:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.374223 kubelet[2852]: E0114 14:35:28.374165 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="3.2s"
Jan 14 14:35:28.480296 kubelet[2852]: I0114 14:35:28.479813 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:28.480757 kubelet[2852]: E0114 14:35:28.480447 2852 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.34:6443/api/v1/nodes\": dial tcp 10.200.8.34:6443: connect: connection refused" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:28.542351 kubelet[2852]: W0114 14:35:28.542273 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.542351 kubelet[2852]: E0114 14:35:28.542355 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.34:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.819627 kubelet[2852]: W0114 14:35:28.819546 2852 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:28.819627 kubelet[2852]: E0114 14:35:28.819634 2852 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.34:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.3.0-a-0bb245c6fa&limit=500&resourceVersion=0": dial tcp 10.200.8.34:6443: connect: connection refused
Jan 14 14:35:31.401901 containerd[1697]: time="2025-01-14T14:35:31.401769560Z" level=info msg="CreateContainer within sandbox \"81aa62bc0c8e5eec922491d116ad74742360c38af1b9389188765d1f4027bbec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"62f7c487affe85a0b8396f6e618c2b971f554f0571009c48785b14d43dcb6208\""
Jan 14 14:35:31.402945 containerd[1697]: time="2025-01-14T14:35:31.402905900Z" level=info msg="StartContainer for \"62f7c487affe85a0b8396f6e618c2b971f554f0571009c48785b14d43dcb6208\""
Jan 14 14:35:31.413402 containerd[1697]: time="2025-01-14T14:35:31.413263157Z" level=info msg="CreateContainer within sandbox \"6b7f7c153bd98e3e4035500f0de036a9df413f800bd714071602da8674b03227\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"eaab0e212cb5aeb51d2f8e9256792f918027c045919b3d929faf533cfa43e3bd\""
Jan 14 14:35:31.413888 containerd[1697]: time="2025-01-14T14:35:31.413805776Z" level=info msg="StartContainer for \"eaab0e212cb5aeb51d2f8e9256792f918027c045919b3d929faf533cfa43e3bd\""
Jan 14 14:35:31.421715 containerd[1697]: time="2025-01-14T14:35:31.421677948Z" level=info msg="CreateContainer within sandbox \"79a91cc71681bc12b075bd6ef356899cef9fd9db39e118b95a5d4d28a7064408\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4492225a6dccdd05b4d748aa64fdd18c11b64bbb9d3b3e3be3ddd814b4b8c718\""
Jan 14 14:35:31.425691 containerd[1697]: time="2025-01-14T14:35:31.425661685Z" level=info msg="StartContainer for \"4492225a6dccdd05b4d748aa64fdd18c11b64bbb9d3b3e3be3ddd814b4b8c718\""
Jan 14 14:35:31.456647 systemd[1]: Started cri-containerd-62f7c487affe85a0b8396f6e618c2b971f554f0571009c48785b14d43dcb6208.scope - libcontainer container 62f7c487affe85a0b8396f6e618c2b971f554f0571009c48785b14d43dcb6208.
Jan 14 14:35:31.481675 systemd[1]: Started cri-containerd-eaab0e212cb5aeb51d2f8e9256792f918027c045919b3d929faf533cfa43e3bd.scope - libcontainer container eaab0e212cb5aeb51d2f8e9256792f918027c045919b3d929faf533cfa43e3bd.
Jan 14 14:35:31.489652 systemd[1]: Started cri-containerd-4492225a6dccdd05b4d748aa64fdd18c11b64bbb9d3b3e3be3ddd814b4b8c718.scope - libcontainer container 4492225a6dccdd05b4d748aa64fdd18c11b64bbb9d3b3e3be3ddd814b4b8c718.
Jan 14 14:35:31.571495 containerd[1697]: time="2025-01-14T14:35:31.569911866Z" level=info msg="StartContainer for \"62f7c487affe85a0b8396f6e618c2b971f554f0571009c48785b14d43dcb6208\" returns successfully"
Jan 14 14:35:31.571495 containerd[1697]: time="2025-01-14T14:35:31.570074972Z" level=info msg="StartContainer for \"4492225a6dccdd05b4d748aa64fdd18c11b64bbb9d3b3e3be3ddd814b4b8c718\" returns successfully"
Jan 14 14:35:31.575043 kubelet[2852]: E0114 14:35:31.574997 2852 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.34:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.3.0-a-0bb245c6fa?timeout=10s\": dial tcp 10.200.8.34:6443: connect: connection refused" interval="6.4s"
Jan 14 14:35:31.581318 containerd[1697]: time="2025-01-14T14:35:31.581279359Z" level=info msg="StartContainer for \"eaab0e212cb5aeb51d2f8e9256792f918027c045919b3d929faf533cfa43e3bd\" returns successfully"
Jan 14 14:35:31.684080 kubelet[2852]: I0114 14:35:31.683971 2852 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:33.597557 kubelet[2852]: I0114 14:35:33.596524 2852 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:34.349716 kubelet[2852]: I0114 14:35:34.349681 2852 apiserver.go:52] "Watching apiserver"
Jan 14 14:35:34.368831 kubelet[2852]: I0114 14:35:34.368779 2852 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 14 14:35:34.442731 kubelet[2852]: W0114 14:35:34.441950 2852 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 14:35:35.373778 systemd[1]: Reloading requested from client PID 3131 ('systemctl') (unit session-9.scope)...
Jan 14 14:35:35.373792 systemd[1]: Reloading...
Jan 14 14:35:35.515492 zram_generator::config[3179]: No configuration found.
Jan 14 14:35:35.625008 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 14 14:35:35.717890 systemd[1]: Reloading finished in 343 ms.
Jan 14 14:35:35.761062 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:35:35.778973 systemd[1]: kubelet.service: Deactivated successfully.
Jan 14 14:35:35.779237 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:35:35.785893 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 14 14:35:35.890304 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 14 14:35:35.901858 (kubelet)[3240]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 14 14:35:35.941352 kubelet[3240]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 14:35:35.941352 kubelet[3240]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jan 14 14:35:35.941352 kubelet[3240]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 14 14:35:35.942831 kubelet[3240]: I0114 14:35:35.941377 3240 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 14:35:35.948134 kubelet[3240]: I0114 14:35:35.948105 3240 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Jan 14 14:35:35.948134 kubelet[3240]: I0114 14:35:35.948127 3240 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 14:35:35.948360 kubelet[3240]: I0114 14:35:35.948340 3240 server.go:927] "Client rotation is on, will bootstrap in background"
Jan 14 14:35:35.949544 kubelet[3240]: I0114 14:35:35.949521 3240 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Jan 14 14:35:35.950650 kubelet[3240]: I0114 14:35:35.950505 3240 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 14:35:35.956796 kubelet[3240]: I0114 14:35:35.956246 3240 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 14:35:35.956796 kubelet[3240]: I0114 14:35:35.956450 3240 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 14:35:35.956796 kubelet[3240]: I0114 14:35:35.956488 3240 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.3.0-a-0bb245c6fa","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Jan 14 14:35:35.956796 kubelet[3240]: I0114 14:35:35.956640 3240 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956649 3240 container_manager_linux.go:301] "Creating device plugin manager"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956691 3240 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956780 3240 kubelet.go:400] "Attempting to sync node with API server"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956798 3240 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956825 3240 kubelet.go:312] "Adding apiserver pod source"
Jan 14 14:35:35.957385 kubelet[3240]: I0114 14:35:35.956856 3240 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 14:35:35.958794 kubelet[3240]: I0114 14:35:35.958758 3240 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jan 14 14:35:35.958958 kubelet[3240]: I0114 14:35:35.958942 3240 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 14:35:35.959422 kubelet[3240]: I0114 14:35:35.959402 3240 server.go:1264] "Started kubelet"
Jan 14 14:35:35.964892 kubelet[3240]: I0114 14:35:35.964871 3240 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 14:35:35.978008 kubelet[3240]: I0114 14:35:35.975060 3240 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 14:35:35.988539 kubelet[3240]: I0114 14:35:35.988523 3240 volume_manager.go:291] "Starting Kubelet Volume Manager"
Jan 14 14:35:35.992219 kubelet[3240]: I0114 14:35:35.992198 3240 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Jan 14 14:35:35.992400 kubelet[3240]: I0114 14:35:35.992382 3240 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 14:35:35.997626 kubelet[3240]: I0114 14:35:35.996997 3240 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 14:35:35.997626 kubelet[3240]: I0114 14:35:35.997329 3240 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 14:35:35.998141 kubelet[3240]: I0114 14:35:35.998122 3240 server.go:455] "Adding debug handlers to kubelet server"
Jan 14 14:35:36.001235 kubelet[3240]: I0114 14:35:36.001210 3240 factory.go:221] Registration of the systemd container factory successfully
Jan 14 14:35:36.001313 kubelet[3240]: I0114 14:35:36.001299 3240 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 14:35:36.003215 kubelet[3240]: I0114 14:35:36.003180 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 14:35:36.004867 kubelet[3240]: E0114 14:35:36.004843 3240 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 14:35:36.005269 kubelet[3240]: I0114 14:35:36.005250 3240 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 14:35:36.005387 kubelet[3240]: I0114 14:35:36.005374 3240 status_manager.go:217] "Starting to sync pod status with apiserver"
Jan 14 14:35:36.005574 kubelet[3240]: I0114 14:35:36.005560 3240 kubelet.go:2337] "Starting kubelet main sync loop"
Jan 14 14:35:36.005715 kubelet[3240]: E0114 14:35:36.005695 3240 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 14:35:36.010362 kubelet[3240]: I0114 14:35:36.010342 3240 factory.go:221] Registration of the containerd container factory successfully
Jan 14 14:35:36.059695 kubelet[3240]: I0114 14:35:36.059662 3240 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jan 14 14:35:36.059862 kubelet[3240]: I0114 14:35:36.059711 3240 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jan 14 14:35:36.059862 kubelet[3240]: I0114 14:35:36.059735 3240 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 14:35:36.060020 kubelet[3240]: I0114 14:35:36.059907 3240 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Jan 14 14:35:36.060020 kubelet[3240]: I0114 14:35:36.059920 3240 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Jan 14 14:35:36.060020 kubelet[3240]: I0114 14:35:36.059942 3240 policy_none.go:49] "None policy: Start"
Jan 14 14:35:36.060851 kubelet[3240]: I0114 14:35:36.060816 3240 memory_manager.go:170] "Starting memorymanager" policy="None"
Jan 14 14:35:36.060851 kubelet[3240]: I0114 14:35:36.060841 3240 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 14:35:36.061064 kubelet[3240]: I0114 14:35:36.061043 3240 state_mem.go:75] "Updated machine memory state"
Jan 14 14:35:36.065401 kubelet[3240]: I0114 14:35:36.064985 3240 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 14:35:36.065401 kubelet[3240]: I0114 14:35:36.065191 3240 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 14:35:36.065401 kubelet[3240]: I0114 14:35:36.065295 3240 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 14:35:36.091961 kubelet[3240]: I0114 14:35:36.091934 3240 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.101742 kubelet[3240]: I0114 14:35:36.101533 3240 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.101742 kubelet[3240]: I0114 14:35:36.101617 3240 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.107844 kubelet[3240]: I0114 14:35:36.106813 3240 topology_manager.go:215] "Topology Admit Handler" podUID="0b359362fd9859b1748a22e074816a11" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.107844 kubelet[3240]: I0114 14:35:36.106930 3240 topology_manager.go:215] "Topology Admit Handler" podUID="f276a7c464bae4ba9f2d935671260d21" podNamespace="kube-system" podName="kube-scheduler-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.107844 kubelet[3240]: I0114 14:35:36.107000 3240 topology_manager.go:215] "Topology Admit Handler" podUID="186592f8b856d6425267503d411330a8" podNamespace="kube-system" podName="kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.116433 kubelet[3240]: W0114 14:35:36.116394 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 14:35:36.116626 kubelet[3240]: W0114 14:35:36.116565 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 14:35:36.117506 kubelet[3240]: W0114 14:35:36.117436 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 14:35:36.117601 kubelet[3240]: E0114 14:35:36.117539 3240 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294191 kubelet[3240]: I0114 14:35:36.293640 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-k8s-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294191 kubelet[3240]: I0114 14:35:36.293699 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-kubeconfig\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294191 kubelet[3240]: I0114 14:35:36.293734 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f276a7c464bae4ba9f2d935671260d21-kubeconfig\") pod \"kube-scheduler-ci-4081.3.0-a-0bb245c6fa\" (UID: \"f276a7c464bae4ba9f2d935671260d21\") " pod="kube-system/kube-scheduler-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294191 kubelet[3240]: I0114 14:35:36.293763 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-ca-certs\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294191 kubelet[3240]: I0114 14:35:36.293796 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294771 kubelet[3240]: I0114 14:35:36.293829 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-ca-certs\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294771 kubelet[3240]: I0114 14:35:36.293857 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-k8s-certs\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294771 kubelet[3240]: I0114 14:35:36.293886 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/186592f8b856d6425267503d411330a8-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" (UID: \"186592f8b856d6425267503d411330a8\") " pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.294771 kubelet[3240]: I0114 14:35:36.293941 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/0b359362fd9859b1748a22e074816a11-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.3.0-a-0bb245c6fa\" (UID: \"0b359362fd9859b1748a22e074816a11\") " pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:36.957756 kubelet[3240]: I0114 14:35:36.957714 3240 apiserver.go:52] "Watching apiserver"
Jan 14 14:35:36.992728 kubelet[3240]: I0114 14:35:36.992695 3240 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Jan 14 14:35:37.046607 kubelet[3240]: W0114 14:35:37.046577 3240 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Jan 14 14:35:37.047427 kubelet[3240]: E0114 14:35:37.046829 3240 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.3.0-a-0bb245c6fa\" already exists" pod="kube-system/kube-apiserver-ci-4081.3.0-a-0bb245c6fa"
Jan 14 14:35:37.065304 kubelet[3240]: I0114 14:35:37.065055 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.3.0-a-0bb245c6fa" podStartSLOduration=1.065032899 podStartE2EDuration="1.065032899s" podCreationTimestamp="2025-01-14 14:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:37.064791393 +0000 UTC m=+1.159188258" watchObservedRunningTime="2025-01-14 14:35:37.065032899 +0000 UTC m=+1.159429664"
Jan 14 14:35:37.073610 kubelet[3240]: I0114 14:35:37.073244 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.3.0-a-0bb245c6fa" podStartSLOduration=1.073224401 podStartE2EDuration="1.073224401s" podCreationTimestamp="2025-01-14 14:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:37.0731729 +0000 UTC m=+1.167569665" watchObservedRunningTime="2025-01-14 14:35:37.073224401 +0000 UTC m=+1.167621266"
Jan 14 14:35:41.538287 sudo[2217]: pam_unix(sudo:session): session closed for user root
Jan 14 14:35:41.644794 sshd[2214]: pam_unix(sshd:session): session closed for user core
Jan 14 14:35:41.648030 systemd[1]: sshd@6-10.200.8.34:22-10.200.16.10:50090.service: Deactivated successfully.
Jan 14 14:35:41.650335 systemd[1]: session-9.scope: Deactivated successfully.
Jan 14 14:35:41.650612 systemd[1]: session-9.scope: Consumed 4.895s CPU time, 190.4M memory peak, 0B memory swap peak.
Jan 14 14:35:41.652071 systemd-logind[1672]: Session 9 logged out. Waiting for processes to exit.
Jan 14 14:35:41.653216 systemd-logind[1672]: Removed session 9.
Jan 14 14:35:52.462541 kubelet[3240]: I0114 14:35:52.462353 3240 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Jan 14 14:35:52.463703 kubelet[3240]: I0114 14:35:52.463261 3240 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Jan 14 14:35:52.463782 containerd[1697]: time="2025-01-14T14:35:52.462936767Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Jan 14 14:35:53.289721 kubelet[3240]: I0114 14:35:53.289670 3240 topology_manager.go:215] "Topology Admit Handler" podUID="cbc8a679-f8da-4d0f-a188-49d60e79760d" podNamespace="kube-system" podName="kube-proxy-qjfks"
Jan 14 14:35:53.304266 systemd[1]: Created slice kubepods-besteffort-podcbc8a679_f8da_4d0f_a188_49d60e79760d.slice - libcontainer container kubepods-besteffort-podcbc8a679_f8da_4d0f_a188_49d60e79760d.slice.
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.394975 3240 topology_manager.go:215] "Topology Admit Handler" podUID="d563c1ac-a9af-4a75-acc0-8a74a2d3bafe" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-8p898"
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.396666 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cbc8a679-f8da-4d0f-a188-49d60e79760d-xtables-lock\") pod \"kube-proxy-qjfks\" (UID: \"cbc8a679-f8da-4d0f-a188-49d60e79760d\") " pod="kube-system/kube-proxy-qjfks"
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.396705 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sj965\" (UniqueName: \"kubernetes.io/projected/cbc8a679-f8da-4d0f-a188-49d60e79760d-kube-api-access-sj965\") pod \"kube-proxy-qjfks\" (UID: \"cbc8a679-f8da-4d0f-a188-49d60e79760d\") " pod="kube-system/kube-proxy-qjfks"
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.396733 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cbc8a679-f8da-4d0f-a188-49d60e79760d-lib-modules\") pod \"kube-proxy-qjfks\" (UID: \"cbc8a679-f8da-4d0f-a188-49d60e79760d\") " pod="kube-system/kube-proxy-qjfks"
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.396755 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/cbc8a679-f8da-4d0f-a188-49d60e79760d-kube-proxy\") pod \"kube-proxy-qjfks\" (UID: \"cbc8a679-f8da-4d0f-a188-49d60e79760d\") " pod="kube-system/kube-proxy-qjfks"
Jan 14 14:35:53.396924 kubelet[3240]: I0114 14:35:53.396777 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d563c1ac-a9af-4a75-acc0-8a74a2d3bafe-var-lib-calico\") pod \"tigera-operator-7bc55997bb-8p898\" (UID: \"d563c1ac-a9af-4a75-acc0-8a74a2d3bafe\") " pod="tigera-operator/tigera-operator-7bc55997bb-8p898"
Jan 14 14:35:53.397327 kubelet[3240]: I0114 14:35:53.396801 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkggf\" (UniqueName: \"kubernetes.io/projected/d563c1ac-a9af-4a75-acc0-8a74a2d3bafe-kube-api-access-xkggf\") pod \"tigera-operator-7bc55997bb-8p898\" (UID: \"d563c1ac-a9af-4a75-acc0-8a74a2d3bafe\") " pod="tigera-operator/tigera-operator-7bc55997bb-8p898"
Jan 14 14:35:53.407977 systemd[1]: Created slice kubepods-besteffort-podd563c1ac_a9af_4a75_acc0_8a74a2d3bafe.slice - libcontainer container kubepods-besteffort-podd563c1ac_a9af_4a75_acc0_8a74a2d3bafe.slice.
Jan 14 14:35:53.614183 containerd[1697]: time="2025-01-14T14:35:53.613605256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjfks,Uid:cbc8a679-f8da-4d0f-a188-49d60e79760d,Namespace:kube-system,Attempt:0,}"
Jan 14 14:35:53.667262 containerd[1697]: time="2025-01-14T14:35:53.667133811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:53.667507 containerd[1697]: time="2025-01-14T14:35:53.667279316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:53.667924 containerd[1697]: time="2025-01-14T14:35:53.667797832Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:53.668089 containerd[1697]: time="2025-01-14T14:35:53.668018839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:53.691968 systemd[1]: Started cri-containerd-b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2.scope - libcontainer container b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2.
Jan 14 14:35:53.715625 containerd[1697]: time="2025-01-14T14:35:53.715349503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8p898,Uid:d563c1ac-a9af-4a75-acc0-8a74a2d3bafe,Namespace:tigera-operator,Attempt:0,}"
Jan 14 14:35:53.717187 containerd[1697]: time="2025-01-14T14:35:53.717081256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qjfks,Uid:cbc8a679-f8da-4d0f-a188-49d60e79760d,Namespace:kube-system,Attempt:0,} returns sandbox id \"b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2\""
Jan 14 14:35:53.721751 containerd[1697]: time="2025-01-14T14:35:53.721583695Z" level=info msg="CreateContainer within sandbox \"b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jan 14 14:35:53.774947 containerd[1697]: time="2025-01-14T14:35:53.774887844Z" level=info msg="CreateContainer within sandbox \"b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"668beb67568ae96e6a59565a862b914d38356c18d7571d18ed332abf1365e6f8\""
Jan 14 14:35:53.777493 containerd[1697]: time="2025-01-14T14:35:53.777212616Z" level=info msg="StartContainer for \"668beb67568ae96e6a59565a862b914d38356c18d7571d18ed332abf1365e6f8\""
Jan 14 14:35:53.782815 containerd[1697]: time="2025-01-14T14:35:53.782723486Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 14 14:35:53.783671 containerd[1697]: time="2025-01-14T14:35:53.783416908Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 14 14:35:53.783870 containerd[1697]: time="2025-01-14T14:35:53.783504011Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:53.783870 containerd[1697]: time="2025-01-14T14:35:53.783618714Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 14 14:35:53.807224 systemd[1]: Started cri-containerd-029596f0418d0b9ee334f08bb9a141c04bc4baa1d4c4f334661e96e7dfe7db8d.scope - libcontainer container 029596f0418d0b9ee334f08bb9a141c04bc4baa1d4c4f334661e96e7dfe7db8d.
Jan 14 14:35:53.815177 systemd[1]: Started cri-containerd-668beb67568ae96e6a59565a862b914d38356c18d7571d18ed332abf1365e6f8.scope - libcontainer container 668beb67568ae96e6a59565a862b914d38356c18d7571d18ed332abf1365e6f8.
Jan 14 14:35:53.864589 containerd[1697]: time="2025-01-14T14:35:53.862492954Z" level=info msg="StartContainer for \"668beb67568ae96e6a59565a862b914d38356c18d7571d18ed332abf1365e6f8\" returns successfully"
Jan 14 14:35:53.880407 containerd[1697]: time="2025-01-14T14:35:53.880355506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-8p898,Uid:d563c1ac-a9af-4a75-acc0-8a74a2d3bafe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"029596f0418d0b9ee334f08bb9a141c04bc4baa1d4c4f334661e96e7dfe7db8d\""
Jan 14 14:35:53.882757 containerd[1697]: time="2025-01-14T14:35:53.882724879Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\""
Jan 14 14:35:54.094187 kubelet[3240]: I0114 14:35:54.094122 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qjfks" podStartSLOduration=1.094100517 podStartE2EDuration="1.094100517s" podCreationTimestamp="2025-01-14 14:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:35:54.093992314 +0000 UTC m=+18.188389079" watchObservedRunningTime="2025-01-14 14:35:54.094100517 +0000 UTC m=+18.188497282"
Jan 14 14:35:54.516109 systemd[1]: run-containerd-runc-k8s.io-b8eeb5b6c59df5d126d42187e0f7248bdacb1cad6c46d8312e0f3bc091f80ca2-runc.sGoheF.mount: Deactivated successfully.
Jan 14 14:35:56.268585 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount350379111.mount: Deactivated successfully.
Jan 14 14:35:56.830535 containerd[1697]: time="2025-01-14T14:35:56.830479550Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:35:56.832884 containerd[1697]: time="2025-01-14T14:35:56.832684518Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764337"
Jan 14 14:35:56.835765 containerd[1697]: time="2025-01-14T14:35:56.835728212Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:35:56.844839 containerd[1697]: time="2025-01-14T14:35:56.844792793Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:35:56.845628 containerd[1697]: time="2025-01-14T14:35:56.845506115Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 2.962566829s"
Jan 14 14:35:56.845628 containerd[1697]: time="2025-01-14T14:35:56.845545116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\""
Jan 14 14:35:56.847912 containerd[1697]: time="2025-01-14T14:35:56.847789486Z" level=info msg="CreateContainer within sandbox \"029596f0418d0b9ee334f08bb9a141c04bc4baa1d4c4f334661e96e7dfe7db8d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jan 14 14:35:56.899821 containerd[1697]: time="2025-01-14T14:35:56.899734892Z" level=info msg="CreateContainer within sandbox \"029596f0418d0b9ee334f08bb9a141c04bc4baa1d4c4f334661e96e7dfe7db8d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"f3a16c6d495435bdd5fd63617ebfeda0e84f7d4aa66adc53df0be9a8ee8c51a6\""
Jan 14 14:35:56.901202 containerd[1697]: time="2025-01-14T14:35:56.900297410Z" level=info msg="StartContainer for \"f3a16c6d495435bdd5fd63617ebfeda0e84f7d4aa66adc53df0be9a8ee8c51a6\""
Jan 14 14:35:56.931712 systemd[1]: Started cri-containerd-f3a16c6d495435bdd5fd63617ebfeda0e84f7d4aa66adc53df0be9a8ee8c51a6.scope - libcontainer container f3a16c6d495435bdd5fd63617ebfeda0e84f7d4aa66adc53df0be9a8ee8c51a6.
Jan 14 14:35:56.959446 containerd[1697]: time="2025-01-14T14:35:56.959404238Z" level=info msg="StartContainer for \"f3a16c6d495435bdd5fd63617ebfeda0e84f7d4aa66adc53df0be9a8ee8c51a6\" returns successfully"
Jan 14 14:35:57.097675 kubelet[3240]: I0114 14:35:57.097278 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-8p898" podStartSLOduration=1.132743212 podStartE2EDuration="4.097256501s" podCreationTimestamp="2025-01-14 14:35:53 +0000 UTC" firstStartedPulling="2025-01-14 14:35:53.881868653 +0000 UTC m=+17.976265418" lastFinishedPulling="2025-01-14 14:35:56.846381942 +0000 UTC m=+20.940778707" observedRunningTime="2025-01-14 14:35:57.097047395 +0000 UTC m=+21.191444160" watchObservedRunningTime="2025-01-14 14:35:57.097256501 +0000 UTC m=+21.191653766"
Jan 14 14:36:00.031956 kubelet[3240]: I0114 14:36:00.031757 3240 topology_manager.go:215] "Topology Admit Handler" podUID="433a7798-027a-4867-a879-370c49ea7598" podNamespace="calico-system" podName="calico-typha-549b88fb84-hxhl7"
Jan 14 14:36:00.048850 systemd[1]: Created slice kubepods-besteffort-pod433a7798_027a_4867_a879_370c49ea7598.slice - libcontainer container kubepods-besteffort-pod433a7798_027a_4867_a879_370c49ea7598.slice.
Jan 14 14:36:00.053735 kubelet[3240]: W0114 14:36:00.052692 3240 reflector.go:547] object-"calico-system"/"typha-certs": failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.0-a-0bb245c6fa" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-0bb245c6fa' and this object Jan 14 14:36:00.053735 kubelet[3240]: E0114 14:36:00.052744 3240 reflector.go:150] object-"calico-system"/"typha-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "typha-certs" is forbidden: User "system:node:ci-4081.3.0-a-0bb245c6fa" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ci-4081.3.0-a-0bb245c6fa' and this object Jan 14 14:36:00.129777 kubelet[3240]: I0114 14:36:00.129198 3240 topology_manager.go:215] "Topology Admit Handler" podUID="f82a5f35-7b8e-4fd0-9f3d-4360cda44855" podNamespace="calico-system" podName="calico-node-mf8qg" Jan 14 14:36:00.137709 kubelet[3240]: I0114 14:36:00.137676 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87xc2\" (UniqueName: \"kubernetes.io/projected/433a7798-027a-4867-a879-370c49ea7598-kube-api-access-87xc2\") pod \"calico-typha-549b88fb84-hxhl7\" (UID: \"433a7798-027a-4867-a879-370c49ea7598\") " pod="calico-system/calico-typha-549b88fb84-hxhl7" Jan 14 14:36:00.137870 kubelet[3240]: I0114 14:36:00.137721 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/433a7798-027a-4867-a879-370c49ea7598-tigera-ca-bundle\") pod \"calico-typha-549b88fb84-hxhl7\" (UID: \"433a7798-027a-4867-a879-370c49ea7598\") " pod="calico-system/calico-typha-549b88fb84-hxhl7" Jan 14 14:36:00.137870 kubelet[3240]: I0114 14:36:00.137748 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/433a7798-027a-4867-a879-370c49ea7598-typha-certs\") pod \"calico-typha-549b88fb84-hxhl7\" (UID: \"433a7798-027a-4867-a879-370c49ea7598\") " pod="calico-system/calico-typha-549b88fb84-hxhl7" Jan 14 14:36:00.144182 systemd[1]: Created slice kubepods-besteffort-podf82a5f35_7b8e_4fd0_9f3d_4360cda44855.slice - libcontainer container kubepods-besteffort-podf82a5f35_7b8e_4fd0_9f3d_4360cda44855.slice. 
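The reflector error pair above is a startup race rather than a misconfiguration: under the Node authorizer, a kubelet may only read a secret once a pod referencing it is bound to its node and the authorizer's graph has caught up, so "typha-certs" is briefly forbidden to node ci-4081.3.0-a-0bb245c6fa right after calico-typha-549b88fb84-hxhl7 is admitted. The mount path retries on a 500ms backoff (see the nestedpendingoperations entry at 14:36:01.240 below) and succeeds once the secret cache syncs. A hedged client-go sketch of the same poll-until-visible pattern, with names and namespace taken from the log (this is not kubelet's actual code path):

    package main

    import (
        "context"
        "fmt"
        "time"

        apierrors "k8s.io/apimachinery/pkg/api/errors"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/rest"
    )

    // Poll a secret the way a node-scoped client would, backing off on
    // Forbidden — the same class of failure the reflector warning reports
    // until the node authorizer links the typha pod's secret to this node.
    func main() {
        cfg, err := rest.InClusterConfig()
        if err != nil {
            panic(err)
        }
        cs := kubernetes.NewForConfigOrDie(cfg)

        for {
            _, err := cs.CoreV1().Secrets("calico-system").
                Get(context.TODO(), "typha-certs", metav1.GetOptions{})
            if apierrors.IsForbidden(err) {
                time.Sleep(500 * time.Millisecond) // mirrors durationBeforeRetry=500ms in the log
                continue
            }
            if err != nil {
                panic(err)
            }
            fmt.Println("secret visible; node-object relationship established")
            return
        }
    }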
Jan 14 14:36:00.238687 kubelet[3240]: I0114 14:36:00.238632 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-node-certs\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238687 kubelet[3240]: I0114 14:36:00.238693 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-var-run-calico\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238915 kubelet[3240]: I0114 14:36:00.238719 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxs4g\" (UniqueName: \"kubernetes.io/projected/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-kube-api-access-kxs4g\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238915 kubelet[3240]: I0114 14:36:00.238764 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-tigera-ca-bundle\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238915 kubelet[3240]: I0114 14:36:00.238800 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-policysync\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238915 kubelet[3240]: I0114 14:36:00.238819 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-cni-log-dir\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.238915 kubelet[3240]: I0114 14:36:00.238840 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-cni-bin-dir\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.239126 kubelet[3240]: I0114 14:36:00.238873 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-lib-modules\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.239126 kubelet[3240]: I0114 14:36:00.238896 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-xtables-lock\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.239126 kubelet[3240]: I0114 14:36:00.238920 3240 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-flexvol-driver-host\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.239126 kubelet[3240]: I0114 14:36:00.238945 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-cni-net-dir\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.239126 kubelet[3240]: I0114 14:36:00.238968 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f82a5f35-7b8e-4fd0-9f3d-4360cda44855-var-lib-calico\") pod \"calico-node-mf8qg\" (UID: \"f82a5f35-7b8e-4fd0-9f3d-4360cda44855\") " pod="calico-system/calico-node-mf8qg" Jan 14 14:36:00.340344 kubelet[3240]: I0114 14:36:00.338440 3240 topology_manager.go:215] "Topology Admit Handler" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1" podNamespace="calico-system" podName="csi-node-driver-jckhs" Jan 14 14:36:00.340344 kubelet[3240]: E0114 14:36:00.338915 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1" Jan 14 14:36:00.384649 kubelet[3240]: E0114 14:36:00.384608 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.384852 kubelet[3240]: W0114 14:36:00.384833 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.384969 kubelet[3240]: E0114 14:36:00.384954 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.443651 kubelet[3240]: E0114 14:36:00.443527 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.444458 kubelet[3240]: W0114 14:36:00.444319 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.444458 kubelet[3240]: E0114 14:36:00.444361 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.444790 kubelet[3240]: E0114 14:36:00.444772 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.444790 kubelet[3240]: W0114 14:36:00.444787 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.444999 kubelet[3240]: E0114 14:36:00.444826 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.444999 kubelet[3240]: I0114 14:36:00.444886 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/09a9e4f5-6d4c-44f9-814c-0e031fb006c1-varrun\") pod \"csi-node-driver-jckhs\" (UID: \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\") " pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:00.445208 kubelet[3240]: E0114 14:36:00.445185 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.445308 kubelet[3240]: W0114 14:36:00.445207 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.445308 kubelet[3240]: E0114 14:36:00.445232 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.445308 kubelet[3240]: I0114 14:36:00.445258 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnjqr\" (UniqueName: \"kubernetes.io/projected/09a9e4f5-6d4c-44f9-814c-0e031fb006c1-kube-api-access-vnjqr\") pod \"csi-node-driver-jckhs\" (UID: \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\") " pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:00.445583 kubelet[3240]: E0114 14:36:00.445537 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.445583 kubelet[3240]: W0114 14:36:00.445551 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.445583 kubelet[3240]: E0114 14:36:00.445573 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.445824 kubelet[3240]: I0114 14:36:00.445598 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/09a9e4f5-6d4c-44f9-814c-0e031fb006c1-registration-dir\") pod \"csi-node-driver-jckhs\" (UID: \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\") " pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:00.445896 kubelet[3240]: E0114 14:36:00.445867 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.445944 kubelet[3240]: W0114 14:36:00.445894 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.445944 kubelet[3240]: E0114 14:36:00.445914 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.445944 kubelet[3240]: I0114 14:36:00.445939 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/09a9e4f5-6d4c-44f9-814c-0e031fb006c1-kubelet-dir\") pod \"csi-node-driver-jckhs\" (UID: \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\") " pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:00.446269 kubelet[3240]: E0114 14:36:00.446222 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.446269 kubelet[3240]: W0114 14:36:00.446236 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.446269 kubelet[3240]: E0114 14:36:00.446253 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.446490 kubelet[3240]: I0114 14:36:00.446275 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/09a9e4f5-6d4c-44f9-814c-0e031fb006c1-socket-dir\") pod \"csi-node-driver-jckhs\" (UID: \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\") " pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:00.446596 kubelet[3240]: E0114 14:36:00.446575 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.446596 kubelet[3240]: W0114 14:36:00.446592 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.446852 kubelet[3240]: E0114 14:36:00.446733 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.446852 kubelet[3240]: E0114 14:36:00.446834 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.446852 kubelet[3240]: W0114 14:36:00.446845 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.447003 kubelet[3240]: E0114 14:36:00.446948 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.447103 kubelet[3240]: E0114 14:36:00.447084 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.447103 kubelet[3240]: W0114 14:36:00.447097 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.447205 kubelet[3240]: E0114 14:36:00.447181 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.447372 kubelet[3240]: E0114 14:36:00.447353 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.447372 kubelet[3240]: W0114 14:36:00.447365 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.447570 kubelet[3240]: E0114 14:36:00.447501 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.447627 kubelet[3240]: E0114 14:36:00.447589 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.447627 kubelet[3240]: W0114 14:36:00.447606 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.447717 kubelet[3240]: E0114 14:36:00.447706 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.447849 kubelet[3240]: E0114 14:36:00.447830 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.447849 kubelet[3240]: W0114 14:36:00.447841 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.447979 kubelet[3240]: E0114 14:36:00.447853 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.448084 kubelet[3240]: E0114 14:36:00.448067 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.448084 kubelet[3240]: W0114 14:36:00.448079 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.448185 kubelet[3240]: E0114 14:36:00.448094 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.448311 kubelet[3240]: E0114 14:36:00.448292 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.448311 kubelet[3240]: W0114 14:36:00.448305 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.448439 kubelet[3240]: E0114 14:36:00.448317 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.448563 kubelet[3240]: E0114 14:36:00.448546 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.448563 kubelet[3240]: W0114 14:36:00.448559 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.448715 kubelet[3240]: E0114 14:36:00.448572 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.448802 kubelet[3240]: E0114 14:36:00.448783 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.448802 kubelet[3240]: W0114 14:36:00.448798 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.448873 kubelet[3240]: E0114 14:36:00.448811 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.451493 containerd[1697]: time="2025-01-14T14:36:00.451388702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mf8qg,Uid:f82a5f35-7b8e-4fd0-9f3d-4360cda44855,Namespace:calico-system,Attempt:0,}" Jan 14 14:36:00.494061 containerd[1697]: time="2025-01-14T14:36:00.493964993Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:00.494061 containerd[1697]: time="2025-01-14T14:36:00.494031695Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:00.494275 containerd[1697]: time="2025-01-14T14:36:00.494191800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:00.495034 containerd[1697]: time="2025-01-14T14:36:00.494862320Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:00.516644 systemd[1]: Started cri-containerd-4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4.scope - libcontainer container 4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4. Jan 14 14:36:00.547866 kubelet[3240]: E0114 14:36:00.547426 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.547866 kubelet[3240]: W0114 14:36:00.547540 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.547866 kubelet[3240]: E0114 14:36:00.547568 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.549192 kubelet[3240]: E0114 14:36:00.548935 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.549192 kubelet[3240]: W0114 14:36:00.548953 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.549192 kubelet[3240]: E0114 14:36:00.548991 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.549793 kubelet[3240]: E0114 14:36:00.549568 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.549793 kubelet[3240]: W0114 14:36:00.549582 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.549793 kubelet[3240]: E0114 14:36:00.549644 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.550147 kubelet[3240]: E0114 14:36:00.550129 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.550147 kubelet[3240]: W0114 14:36:00.550144 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.550275 kubelet[3240]: E0114 14:36:00.550248 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.551023 kubelet[3240]: E0114 14:36:00.550921 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.551023 kubelet[3240]: W0114 14:36:00.550938 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.551023 kubelet[3240]: E0114 14:36:00.550957 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.551721 kubelet[3240]: E0114 14:36:00.551238 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.551721 kubelet[3240]: W0114 14:36:00.551249 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.551721 kubelet[3240]: E0114 14:36:00.551351 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.551721 kubelet[3240]: E0114 14:36:00.551532 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.551721 kubelet[3240]: W0114 14:36:00.551543 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.551721 kubelet[3240]: E0114 14:36:00.551628 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.551994 kubelet[3240]: E0114 14:36:00.551768 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.551994 kubelet[3240]: W0114 14:36:00.551777 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.551994 kubelet[3240]: E0114 14:36:00.551793 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.551994 kubelet[3240]: E0114 14:36:00.551993 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.552149 kubelet[3240]: W0114 14:36:00.552002 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.552149 kubelet[3240]: E0114 14:36:00.552025 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.552976 kubelet[3240]: E0114 14:36:00.552407 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.552976 kubelet[3240]: W0114 14:36:00.552422 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.552976 kubelet[3240]: E0114 14:36:00.552514 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.552976 kubelet[3240]: E0114 14:36:00.552833 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.552976 kubelet[3240]: W0114 14:36:00.552844 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.552976 kubelet[3240]: E0114 14:36:00.552930 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.553976 kubelet[3240]: E0114 14:36:00.553641 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.553976 kubelet[3240]: W0114 14:36:00.553653 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.553976 kubelet[3240]: E0114 14:36:00.553805 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.554618 containerd[1697]: time="2025-01-14T14:36:00.553172290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mf8qg,Uid:f82a5f35-7b8e-4fd0-9f3d-4360cda44855,Namespace:calico-system,Attempt:0,} returns sandbox id \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\"" Jan 14 14:36:00.555016 kubelet[3240]: E0114 14:36:00.553979 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.555016 kubelet[3240]: W0114 14:36:00.553988 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.555016 kubelet[3240]: E0114 14:36:00.554114 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.555016 kubelet[3240]: E0114 14:36:00.554382 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.555016 kubelet[3240]: W0114 14:36:00.554393 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.555016 kubelet[3240]: E0114 14:36:00.554762 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.555016 kubelet[3240]: E0114 14:36:00.555006 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.555016 kubelet[3240]: W0114 14:36:00.555019 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.556077 kubelet[3240]: E0114 14:36:00.555774 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.556077 kubelet[3240]: W0114 14:36:00.555785 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.556793 kubelet[3240]: E0114 14:36:00.556601 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.556793 kubelet[3240]: W0114 14:36:00.556614 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.556793 kubelet[3240]: E0114 14:36:00.556741 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.556793 kubelet[3240]: E0114 14:36:00.556761 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.556793 kubelet[3240]: E0114 14:36:00.556774 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.558043 kubelet[3240]: E0114 14:36:00.556966 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.558043 kubelet[3240]: W0114 14:36:00.556976 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.558043 kubelet[3240]: E0114 14:36:00.557014 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.558504 containerd[1697]: time="2025-01-14T14:36:00.557010506Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 14 14:36:00.558961 kubelet[3240]: E0114 14:36:00.558840 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.558961 kubelet[3240]: W0114 14:36:00.558856 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.558961 kubelet[3240]: E0114 14:36:00.558897 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.559457 kubelet[3240]: E0114 14:36:00.559240 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.559457 kubelet[3240]: W0114 14:36:00.559273 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.559457 kubelet[3240]: E0114 14:36:00.559341 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.559457 kubelet[3240]: E0114 14:36:00.559717 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.559457 kubelet[3240]: W0114 14:36:00.559729 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.559457 kubelet[3240]: E0114 14:36:00.559791 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.560264 kubelet[3240]: E0114 14:36:00.560073 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.560264 kubelet[3240]: W0114 14:36:00.560102 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.560264 kubelet[3240]: E0114 14:36:00.560187 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.561051 kubelet[3240]: E0114 14:36:00.560574 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.561051 kubelet[3240]: W0114 14:36:00.560589 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.561051 kubelet[3240]: E0114 14:36:00.560637 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.561051 kubelet[3240]: E0114 14:36:00.560942 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.561051 kubelet[3240]: W0114 14:36:00.560954 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.561051 kubelet[3240]: E0114 14:36:00.560995 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.561630 kubelet[3240]: E0114 14:36:00.561443 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.561630 kubelet[3240]: W0114 14:36:00.561458 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.561630 kubelet[3240]: E0114 14:36:00.561493 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.562485 kubelet[3240]: E0114 14:36:00.562183 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.562485 kubelet[3240]: W0114 14:36:00.562198 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.562485 kubelet[3240]: E0114 14:36:00.562214 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.573555 kubelet[3240]: E0114 14:36:00.573534 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.573744 kubelet[3240]: W0114 14:36:00.573688 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.573744 kubelet[3240]: E0114 14:36:00.573714 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.654979 kubelet[3240]: E0114 14:36:00.654806 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.654979 kubelet[3240]: W0114 14:36:00.654841 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.654979 kubelet[3240]: E0114 14:36:00.654871 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:00.755836 kubelet[3240]: E0114 14:36:00.755793 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.755836 kubelet[3240]: W0114 14:36:00.755824 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.756051 kubelet[3240]: E0114 14:36:00.755871 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.856400 kubelet[3240]: E0114 14:36:00.856362 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.856400 kubelet[3240]: W0114 14:36:00.856388 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.856625 kubelet[3240]: E0114 14:36:00.856414 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:00.958118 kubelet[3240]: E0114 14:36:00.957975 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:00.958118 kubelet[3240]: W0114 14:36:00.958006 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:00.958118 kubelet[3240]: E0114 14:36:00.958036 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.058723 kubelet[3240]: E0114 14:36:01.058686 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.058723 kubelet[3240]: W0114 14:36:01.058723 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.059211 kubelet[3240]: E0114 14:36:01.058754 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.160426 kubelet[3240]: E0114 14:36:01.160363 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.160426 kubelet[3240]: W0114 14:36:01.160411 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.160843 kubelet[3240]: E0114 14:36:01.160501 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:01.240332 kubelet[3240]: E0114 14:36:01.240200 3240 secret.go:194] Couldn't get secret calico-system/typha-certs: failed to sync secret cache: timed out waiting for the condition Jan 14 14:36:01.240332 kubelet[3240]: E0114 14:36:01.240319 3240 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/433a7798-027a-4867-a879-370c49ea7598-typha-certs podName:433a7798-027a-4867-a879-370c49ea7598 nodeName:}" failed. No retries permitted until 2025-01-14 14:36:01.740290136 +0000 UTC m=+25.834686901 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "typha-certs" (UniqueName: "kubernetes.io/secret/433a7798-027a-4867-a879-370c49ea7598-typha-certs") pod "calico-typha-549b88fb84-hxhl7" (UID: "433a7798-027a-4867-a879-370c49ea7598") : failed to sync secret cache: timed out waiting for the condition Jan 14 14:36:01.261809 kubelet[3240]: E0114 14:36:01.261778 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.261809 kubelet[3240]: W0114 14:36:01.261803 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.261964 kubelet[3240]: E0114 14:36:01.261830 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.362771 kubelet[3240]: E0114 14:36:01.362731 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.362771 kubelet[3240]: W0114 14:36:01.362759 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.363008 kubelet[3240]: E0114 14:36:01.362786 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.463839 kubelet[3240]: E0114 14:36:01.463801 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.463839 kubelet[3240]: W0114 14:36:01.463828 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.464029 kubelet[3240]: E0114 14:36:01.463855 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:01.565500 kubelet[3240]: E0114 14:36:01.565433 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.565705 kubelet[3240]: W0114 14:36:01.565514 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.565705 kubelet[3240]: E0114 14:36:01.565547 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.666668 kubelet[3240]: E0114 14:36:01.666615 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.666668 kubelet[3240]: W0114 14:36:01.666646 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.666668 kubelet[3240]: E0114 14:36:01.666679 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.767851 kubelet[3240]: E0114 14:36:01.767811 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.767851 kubelet[3240]: W0114 14:36:01.767839 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.768140 kubelet[3240]: E0114 14:36:01.767867 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.768307 kubelet[3240]: E0114 14:36:01.768283 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.768307 kubelet[3240]: W0114 14:36:01.768303 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.768519 kubelet[3240]: E0114 14:36:01.768325 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.768648 kubelet[3240]: E0114 14:36:01.768626 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.768648 kubelet[3240]: W0114 14:36:01.768644 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.768839 kubelet[3240]: E0114 14:36:01.768662 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 14:36:01.768933 kubelet[3240]: E0114 14:36:01.768900 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.768933 kubelet[3240]: W0114 14:36:01.768916 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.769059 kubelet[3240]: E0114 14:36:01.768932 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.769217 kubelet[3240]: E0114 14:36:01.769199 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.769217 kubelet[3240]: W0114 14:36:01.769214 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.769379 kubelet[3240]: E0114 14:36:01.769230 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.775223 kubelet[3240]: E0114 14:36:01.775136 3240 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 14:36:01.775223 kubelet[3240]: W0114 14:36:01.775158 3240 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 14:36:01.775223 kubelet[3240]: E0114 14:36:01.775175 3240 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 14:36:01.857021 containerd[1697]: time="2025-01-14T14:36:01.856355627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549b88fb84-hxhl7,Uid:433a7798-027a-4867-a879-370c49ea7598,Namespace:calico-system,Attempt:0,}" Jan 14 14:36:01.916986 containerd[1697]: time="2025-01-14T14:36:01.916805361Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:01.917225 containerd[1697]: time="2025-01-14T14:36:01.917138571Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:01.917864 containerd[1697]: time="2025-01-14T14:36:01.917661587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:01.917864 containerd[1697]: time="2025-01-14T14:36:01.917790991Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:01.948629 systemd[1]: Started cri-containerd-8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30.scope - libcontainer container 8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30. 
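The repeating driver-call.go/plugins.go triplet above is kubelet's FlexVolume prober exec'ing Calico's "uds" driver with the `init` argument before the driver binary has been installed under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/: the exec fails, stdout is empty, and unmarshalling an empty byte slice with Go's encoding/json yields exactly "unexpected end of JSON input". The binary is installed by the flexvol-driver container whose image (ghcr.io/flatcar/calico/pod2daemon-flexvol) is pulled at 14:36:05 below, after which the probe can parse a real status. A self-contained sketch of the failure mode; the DriverStatus fields are trimmed for illustration:

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // DriverStatus mirrors the JSON a FlexVolume driver must print on stdout.
    type DriverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    // callDriver reproduces the logged failure: a missing driver binary means
    // empty output, and json.Unmarshal on empty input returns
    // "unexpected end of JSON input" — the same error kubelet reports.
    func callDriver(path string, args ...string) (*DriverStatus, error) {
        out, execErr := exec.Command(path, args...).CombinedOutput()
        var st DriverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output %q: %w (exec error: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        _, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
        fmt.Println(err) // binary absent -> empty output -> unexpected end of JSON input
    }

Kubelet logs the exec failure and the unmarshal failure as separate lines, which is why each probe pass produces the same three-entry burst until the driver appears.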
Jan 14 14:36:01.994282 containerd[1697]: time="2025-01-14T14:36:01.994228510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-549b88fb84-hxhl7,Uid:433a7798-027a-4867-a879-370c49ea7598,Namespace:calico-system,Attempt:0,} returns sandbox id \"8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30\""
Jan 14 14:36:02.008132 kubelet[3240]: E0114 14:36:02.006816 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:02.252102 systemd[1]: run-containerd-runc-k8s.io-8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30-runc.hPzIQv.mount: Deactivated successfully.
Jan 14 14:36:04.006702 kubelet[3240]: E0114 14:36:04.006276 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:05.251949 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286670940.mount: Deactivated successfully.
Jan 14 14:36:05.460189 containerd[1697]: time="2025-01-14T14:36:05.460139261Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:05.462614 containerd[1697]: time="2025-01-14T14:36:05.462567235Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Jan 14 14:36:05.466310 containerd[1697]: time="2025-01-14T14:36:05.466248147Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:05.476188 containerd[1697]: time="2025-01-14T14:36:05.475145117Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:05.476188 containerd[1697]: time="2025-01-14T14:36:05.476016443Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 4.918967236s"
Jan 14 14:36:05.476188 containerd[1697]: time="2025-01-14T14:36:05.476053744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Jan 14 14:36:05.477796 containerd[1697]: time="2025-01-14T14:36:05.477765796Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\""
Jan 14 14:36:05.479626 containerd[1697]: time="2025-01-14T14:36:05.479588852Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 14 14:36:05.527357 containerd[1697]: time="2025-01-14T14:36:05.527203996Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145\""
Jan 14 14:36:05.529346 containerd[1697]: time="2025-01-14T14:36:05.527977120Z" level=info msg="StartContainer for \"bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145\""
Jan 14 14:36:05.560597 systemd[1]: Started cri-containerd-bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145.scope - libcontainer container bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145.
Jan 14 14:36:05.591182 containerd[1697]: time="2025-01-14T14:36:05.590172807Z" level=info msg="StartContainer for \"bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145\" returns successfully"
Jan 14 14:36:05.601996 systemd[1]: cri-containerd-bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145.scope: Deactivated successfully.
Jan 14 14:36:06.007427 kubelet[3240]: E0114 14:36:06.006686 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:06.210939 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145-rootfs.mount: Deactivated successfully.
Jan 14 14:36:06.393885 containerd[1697]: time="2025-01-14T14:36:06.393789400Z" level=info msg="shim disconnected" id=bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145 namespace=k8s.io
Jan 14 14:36:06.393885 containerd[1697]: time="2025-01-14T14:36:06.393861502Z" level=warning msg="cleaning up after shim disconnected" id=bb8abab86e972fa578c793913338f4f9f2609467f721bf64cc03ab98ca47b145 namespace=k8s.io
Jan 14 14:36:06.393885 containerd[1697]: time="2025-01-14T14:36:06.393873202Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 14 14:36:06.409984 containerd[1697]: time="2025-01-14T14:36:06.409908789Z" level=warning msg="cleanup warnings time=\"2025-01-14T14:36:06Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io
Jan 14 14:36:08.009023 kubelet[3240]: E0114 14:36:08.007636 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:10.006228 kubelet[3240]: E0114 14:36:10.006152 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:10.682782 containerd[1697]: time="2025-01-14T14:36:10.682730961Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:10.687398 containerd[1697]: time="2025-01-14T14:36:10.687325500Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141"
Jan 14 14:36:10.693870 containerd[1697]: time="2025-01-14T14:36:10.692647262Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:10.701080 containerd[1697]: time="2025-01-14T14:36:10.701035517Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:10.701854 containerd[1697]: time="2025-01-14T14:36:10.701710737Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 5.223107716s"
Jan 14 14:36:10.701854 containerd[1697]: time="2025-01-14T14:36:10.701748439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\""
Jan 14 14:36:10.703618 containerd[1697]: time="2025-01-14T14:36:10.703264485Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Jan 14 14:36:10.721056 containerd[1697]: time="2025-01-14T14:36:10.721025724Z" level=info msg="CreateContainer within sandbox \"8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 14 14:36:10.775862 containerd[1697]: time="2025-01-14T14:36:10.775826688Z" level=info msg="CreateContainer within sandbox \"8880b020937078616add388a06ad6a4bf2c58649aea3686d8555721585a6ce30\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1d951526058cdb7c781e0d6b54baa33806cbe90fb460b550346a9e4c7e24a31f\""
Jan 14 14:36:10.776521 containerd[1697]: time="2025-01-14T14:36:10.776335504Z" level=info msg="StartContainer for \"1d951526058cdb7c781e0d6b54baa33806cbe90fb460b550346a9e4c7e24a31f\""
Jan 14 14:36:10.808620 systemd[1]: Started cri-containerd-1d951526058cdb7c781e0d6b54baa33806cbe90fb460b550346a9e4c7e24a31f.scope - libcontainer container 1d951526058cdb7c781e0d6b54baa33806cbe90fb460b550346a9e4c7e24a31f.
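Each pull above follows the same four-event pattern: ImageCreate events for the tag, the config blob, and the repo digest, a "stop pulling" line with total bytes read, and a "Pulled image ... in <duration>" summary timed inside containerd itself. The same pull can be reproduced by hand against the CRI image store; a sketch using the containerd Go client, assuming the default socket path and the k8s.io namespace that the CRI plugin (and the shim logs above) use:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Talk to the same containerd instance the kubelet uses.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// CRI-managed images live in the "k8s.io" namespace seen in the shim logs.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/typha:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// Rough analogue of containerd's own "Pulled image ... in <duration>" line.
	fmt.Printf("pulled %s in %s\n", img.Name(), time.Since(start))
}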
Jan 14 14:36:10.856505 containerd[1697]: time="2025-01-14T14:36:10.856115527Z" level=info msg="StartContainer for \"1d951526058cdb7c781e0d6b54baa33806cbe90fb460b550346a9e4c7e24a31f\" returns successfully"
Jan 14 14:36:11.148978 kubelet[3240]: I0114 14:36:11.148881 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-549b88fb84-hxhl7" podStartSLOduration=3.442990539 podStartE2EDuration="12.148859118s" podCreationTimestamp="2025-01-14 14:35:59 +0000 UTC" firstStartedPulling="2025-01-14 14:36:01.996924391 +0000 UTC m=+26.091321256" lastFinishedPulling="2025-01-14 14:36:10.70279307 +0000 UTC m=+34.797189835" observedRunningTime="2025-01-14 14:36:11.137441571 +0000 UTC m=+35.231838436" watchObservedRunningTime="2025-01-14 14:36:11.148859118 +0000 UTC m=+35.243255883"
Jan 14 14:36:12.008495 kubelet[3240]: E0114 14:36:12.006940 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:14.006347 kubelet[3240]: E0114 14:36:14.006288 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:14.694854 containerd[1697]: time="2025-01-14T14:36:14.694796855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:14.697437 containerd[1697]: time="2025-01-14T14:36:14.697383332Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Jan 14 14:36:14.701537 containerd[1697]: time="2025-01-14T14:36:14.701232547Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:14.709252 containerd[1697]: time="2025-01-14T14:36:14.709223087Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 14 14:36:14.709892 containerd[1697]: time="2025-01-14T14:36:14.709855906Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.006556921s"
Jan 14 14:36:14.709980 containerd[1697]: time="2025-01-14T14:36:14.709910708Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Jan 14 14:36:14.712788 containerd[1697]: time="2025-01-14T14:36:14.712757293Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 14 14:36:14.762070 containerd[1697]: time="2025-01-14T14:36:14.762028570Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4\""
Jan 14 14:36:14.763502 containerd[1697]: time="2025-01-14T14:36:14.762569286Z" level=info msg="StartContainer for \"6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4\""
Jan 14 14:36:14.797838 systemd[1]: Started cri-containerd-6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4.scope - libcontainer container 6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4.
Jan 14 14:36:14.826093 containerd[1697]: time="2025-01-14T14:36:14.825975588Z" level=info msg="StartContainer for \"6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4\" returns successfully"
Jan 14 14:36:16.007418 kubelet[3240]: E0114 14:36:16.007358 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1"
Jan 14 14:36:16.231920 containerd[1697]: time="2025-01-14T14:36:16.231868140Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 14 14:36:16.234333 systemd[1]: cri-containerd-6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4.scope: Deactivated successfully.
Jan 14 14:36:16.257132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4-rootfs.mount: Deactivated successfully.
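The pod_startup_latency_tracker entry for calico-typha above is self-consistent once read against its monotonic (m=+...) clock offsets: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window, which the kubelet's pod-startup SLI excludes. A worked check, using only what the entry itself reports:

  E2E  = 14:36:11.148859118 - 14:35:59        = 12.148859118s
  pull = m=+34.797189835 - m=+26.091321256    =  8.705868579s
  SLO  = 12.148859118s - 8.705868579s         =  3.442990539s

which matches podStartSLOduration=3.442990539 exactly.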
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.310036 3240 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.338753 3240 topology_manager.go:215] "Topology Admit Handler" podUID="e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a" podNamespace="kube-system" podName="coredns-7db6d8ff4d-gs5qf"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.341086 3240 topology_manager.go:215] "Topology Admit Handler" podUID="c33bf41e-f146-4bd5-b602-c1e913049366" podNamespace="kube-system" podName="coredns-7db6d8ff4d-tphr5"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.347306 3240 topology_manager.go:215] "Topology Admit Handler" podUID="f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e" podNamespace="calico-system" podName="calico-kube-controllers-7bb44f7d4c-kl6dd"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.348303 3240 topology_manager.go:215] "Topology Admit Handler" podUID="e8634fa1-fd6c-4670-aa78-0572c049583e" podNamespace="calico-apiserver" podName="calico-apiserver-77645db595-cvbcq"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.350563 3240 topology_manager.go:215] "Topology Admit Handler" podUID="82a33404-98b2-48ea-9d78-61b8e4c56093" podNamespace="calico-apiserver" podName="calico-apiserver-77645db595-fpj2w"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.369123 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e-tigera-ca-bundle\") pod \"calico-kube-controllers-7bb44f7d4c-kl6dd\" (UID: \"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e\") " pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd"
Jan 14 14:36:16.743967 kubelet[3240]: I0114 14:36:16.369162 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a-config-volume\") pod \"coredns-7db6d8ff4d-gs5qf\" (UID: \"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a\") " pod="kube-system/coredns-7db6d8ff4d-gs5qf"
Jan 14 14:36:16.357198 systemd[1]: Created slice kubepods-burstable-pode83a3fe2_fe35_4285_8916_cdbbbbdf9a7a.slice - libcontainer container kubepods-burstable-pode83a3fe2_fe35_4285_8916_cdbbbbdf9a7a.slice.
Jan 14 14:36:16.744941 kubelet[3240]: I0114 14:36:16.369186 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c33bf41e-f146-4bd5-b602-c1e913049366-config-volume\") pod \"coredns-7db6d8ff4d-tphr5\" (UID: \"c33bf41e-f146-4bd5-b602-c1e913049366\") " pod="kube-system/coredns-7db6d8ff4d-tphr5"
Jan 14 14:36:16.744941 kubelet[3240]: I0114 14:36:16.369210 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/82a33404-98b2-48ea-9d78-61b8e4c56093-calico-apiserver-certs\") pod \"calico-apiserver-77645db595-fpj2w\" (UID: \"82a33404-98b2-48ea-9d78-61b8e4c56093\") " pod="calico-apiserver/calico-apiserver-77645db595-fpj2w"
Jan 14 14:36:16.744941 kubelet[3240]: I0114 14:36:16.369244 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbbl2\" (UniqueName: \"kubernetes.io/projected/e8634fa1-fd6c-4670-aa78-0572c049583e-kube-api-access-tbbl2\") pod \"calico-apiserver-77645db595-cvbcq\" (UID: \"e8634fa1-fd6c-4670-aa78-0572c049583e\") " pod="calico-apiserver/calico-apiserver-77645db595-cvbcq"
Jan 14 14:36:16.744941 kubelet[3240]: I0114 14:36:16.369268 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf4k8\" (UniqueName: \"kubernetes.io/projected/82a33404-98b2-48ea-9d78-61b8e4c56093-kube-api-access-mf4k8\") pod \"calico-apiserver-77645db595-fpj2w\" (UID: \"82a33404-98b2-48ea-9d78-61b8e4c56093\") " pod="calico-apiserver/calico-apiserver-77645db595-fpj2w"
Jan 14 14:36:16.744941 kubelet[3240]: I0114 14:36:16.369295 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/e8634fa1-fd6c-4670-aa78-0572c049583e-calico-apiserver-certs\") pod \"calico-apiserver-77645db595-cvbcq\" (UID: \"e8634fa1-fd6c-4670-aa78-0572c049583e\") " pod="calico-apiserver/calico-apiserver-77645db595-cvbcq"
Jan 14 14:36:16.368345 systemd[1]: Created slice kubepods-burstable-podc33bf41e_f146_4bd5_b602_c1e913049366.slice - libcontainer container kubepods-burstable-podc33bf41e_f146_4bd5_b602_c1e913049366.slice.
Jan 14 14:36:16.745330 kubelet[3240]: I0114 14:36:16.369320 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sr9z6\" (UniqueName: \"kubernetes.io/projected/c33bf41e-f146-4bd5-b602-c1e913049366-kube-api-access-sr9z6\") pod \"coredns-7db6d8ff4d-tphr5\" (UID: \"c33bf41e-f146-4bd5-b602-c1e913049366\") " pod="kube-system/coredns-7db6d8ff4d-tphr5"
Jan 14 14:36:16.745330 kubelet[3240]: I0114 14:36:16.369345 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-69skq\" (UniqueName: \"kubernetes.io/projected/f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e-kube-api-access-69skq\") pod \"calico-kube-controllers-7bb44f7d4c-kl6dd\" (UID: \"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e\") " pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd"
Jan 14 14:36:16.745330 kubelet[3240]: I0114 14:36:16.369377 3240 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x22ph\" (UniqueName: \"kubernetes.io/projected/e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a-kube-api-access-x22ph\") pod \"coredns-7db6d8ff4d-gs5qf\" (UID: \"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a\") " pod="kube-system/coredns-7db6d8ff4d-gs5qf"
Jan 14 14:36:16.381913 systemd[1]: Created slice kubepods-besteffort-podf1d6f4c1_bcbc_47e7_b8d4_c5e9b1c5d26e.slice - libcontainer container kubepods-besteffort-podf1d6f4c1_bcbc_47e7_b8d4_c5e9b1c5d26e.slice.
Jan 14 14:36:16.387266 systemd[1]: Created slice kubepods-besteffort-pod82a33404_98b2_48ea_9d78_61b8e4c56093.slice - libcontainer container kubepods-besteffort-pod82a33404_98b2_48ea_9d78_61b8e4c56093.slice.
Jan 14 14:36:16.396632 systemd[1]: Created slice kubepods-besteffort-pode8634fa1_fd6c_4670_aa78_0572c049583e.slice - libcontainer container kubepods-besteffort-pode8634fa1_fd6c_4670_aa78_0572c049583e.slice.
Jan 14 14:36:17.047743 containerd[1697]: time="2025-01-14T14:36:17.047693500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gs5qf,Uid:e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a,Namespace:kube-system,Attempt:0,}"
Jan 14 14:36:17.053506 containerd[1697]: time="2025-01-14T14:36:17.053409371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb44f7d4c-kl6dd,Uid:f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e,Namespace:calico-system,Attempt:0,}"
Jan 14 14:36:17.054973 containerd[1697]: time="2025-01-14T14:36:17.054940317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tphr5,Uid:c33bf41e-f146-4bd5-b602-c1e913049366,Namespace:kube-system,Attempt:0,}"
Jan 14 14:36:17.063574 containerd[1697]: time="2025-01-14T14:36:17.063484873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-fpj2w,Uid:82a33404-98b2-48ea-9d78-61b8e4c56093,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 14:36:17.063712 containerd[1697]: time="2025-01-14T14:36:17.063484873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-cvbcq,Uid:e8634fa1-fd6c-4670-aa78-0572c049583e,Namespace:calico-apiserver,Attempt:0,}"
Jan 14 14:36:18.018459 systemd[1]: Created slice kubepods-besteffort-pod09a9e4f5_6d4c_44f9_814c_0e031fb006c1.slice - libcontainer container kubepods-besteffort-pod09a9e4f5_6d4c_44f9_814c_0e031fb006c1.slice.
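The Created slice lines show the kubelet's systemd cgroup naming: each pod gets a leaf slice kubepods-<qos>-pod<uid>.slice under its QoS class, with the dashes in the pod UID rewritten to underscores (systemd reserves "-" as a hierarchy separator in slice names). The two coredns pods land in kubepods-burstable-*, the Calico pods in kubepods-besteffort-*. A small sketch of the mapping, reconstructing only the leaf unit name rather than the full slice hierarchy:

package main

import (
	"fmt"
	"strings"
)

// sliceName rebuilds the leaf unit name visible in the log:
// kubepods-<qos>-pod<uid-with-underscores>.slice. The real cgroup path nests
// under kubepods.slice/kubepods-<qos>.slice; only the leaf is shown here.
func sliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qosClass, strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// coredns-7db6d8ff4d-gs5qf and calico-kube-controllers-7bb44f7d4c-kl6dd above.
	fmt.Println(sliceName("burstable", "e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a"))
	fmt.Println(sliceName("besteffort", "f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e"))
}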
Jan 14 14:36:18.022593 containerd[1697]: time="2025-01-14T14:36:18.022546028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jckhs,Uid:09a9e4f5-6d4c-44f9-814c-0e031fb006c1,Namespace:calico-system,Attempt:0,}" Jan 14 14:36:18.442223 containerd[1697]: time="2025-01-14T14:36:18.442112108Z" level=info msg="shim disconnected" id=6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4 namespace=k8s.io Jan 14 14:36:18.442223 containerd[1697]: time="2025-01-14T14:36:18.442193410Z" level=warning msg="cleaning up after shim disconnected" id=6774a200c695b048dc73d06bacb16b533feb02dadb958fccbf7d41da9be6dcb4 namespace=k8s.io Jan 14 14:36:18.442223 containerd[1697]: time="2025-01-14T14:36:18.442208211Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 14 14:36:18.792574 containerd[1697]: time="2025-01-14T14:36:18.792281307Z" level=error msg="Failed to destroy network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.794571 containerd[1697]: time="2025-01-14T14:36:18.793435941Z" level=error msg="encountered an error cleaning up failed sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.795766 containerd[1697]: time="2025-01-14T14:36:18.795604806Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gs5qf,Uid:e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.795944 kubelet[3240]: E0114 14:36:18.795880 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.797981 kubelet[3240]: E0114 14:36:18.795969 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gs5qf" Jan 14 14:36:18.797981 kubelet[3240]: E0114 14:36:18.796009 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-gs5qf" Jan 14 14:36:18.797981 kubelet[3240]: E0114 14:36:18.796068 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-gs5qf_kube-system(e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-gs5qf_kube-system(e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gs5qf" podUID="e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a" Jan 14 14:36:18.844932 containerd[1697]: time="2025-01-14T14:36:18.844637077Z" level=error msg="Failed to destroy network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.845950 containerd[1697]: time="2025-01-14T14:36:18.845884514Z" level=error msg="encountered an error cleaning up failed sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.846309 containerd[1697]: time="2025-01-14T14:36:18.846189923Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-cvbcq,Uid:e8634fa1-fd6c-4670-aa78-0572c049583e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.847000 kubelet[3240]: E0114 14:36:18.846941 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.847131 kubelet[3240]: E0114 14:36:18.847023 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77645db595-cvbcq" Jan 14 14:36:18.847131 kubelet[3240]: E0114 14:36:18.847051 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77645db595-cvbcq" Jan 14 14:36:18.847638 kubelet[3240]: E0114 14:36:18.847122 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77645db595-cvbcq_calico-apiserver(e8634fa1-fd6c-4670-aa78-0572c049583e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77645db595-cvbcq_calico-apiserver(e8634fa1-fd6c-4670-aa78-0572c049583e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77645db595-cvbcq" podUID="e8634fa1-fd6c-4670-aa78-0572c049583e" Jan 14 14:36:18.866920 containerd[1697]: time="2025-01-14T14:36:18.866849443Z" level=error msg="Failed to destroy network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.867740 containerd[1697]: time="2025-01-14T14:36:18.867693368Z" level=error msg="encountered an error cleaning up failed sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.867901 containerd[1697]: time="2025-01-14T14:36:18.867868273Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tphr5,Uid:c33bf41e-f146-4bd5-b602-c1e913049366,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.868745 kubelet[3240]: E0114 14:36:18.868298 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.868745 kubelet[3240]: E0114 14:36:18.868366 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tphr5" Jan 14 14:36:18.868745 kubelet[3240]: E0114 14:36:18.868393 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-tphr5" Jan 14 14:36:18.868976 kubelet[3240]: E0114 14:36:18.868453 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-tphr5_kube-system(c33bf41e-f146-4bd5-b602-c1e913049366)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-tphr5_kube-system(c33bf41e-f146-4bd5-b602-c1e913049366)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tphr5" podUID="c33bf41e-f146-4bd5-b602-c1e913049366" Jan 14 14:36:18.872289 containerd[1697]: time="2025-01-14T14:36:18.872162102Z" level=error msg="Failed to destroy network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.873018 containerd[1697]: time="2025-01-14T14:36:18.872887924Z" level=error msg="encountered an error cleaning up failed sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.874013 containerd[1697]: time="2025-01-14T14:36:18.873045628Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb44f7d4c-kl6dd,Uid:f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.874481 kubelet[3240]: E0114 14:36:18.874227 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.874481 kubelet[3240]: E0114 14:36:18.874282 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd" Jan 14 14:36:18.874481 kubelet[3240]: E0114 14:36:18.874305 3240 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd" Jan 14 14:36:18.874658 kubelet[3240]: E0114 14:36:18.874349 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7bb44f7d4c-kl6dd_calico-system(f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7bb44f7d4c-kl6dd_calico-system(f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd" podUID="f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e" Jan 14 14:36:18.885492 containerd[1697]: time="2025-01-14T14:36:18.885438900Z" level=error msg="Failed to destroy network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.885947 containerd[1697]: time="2025-01-14T14:36:18.885918914Z" level=error msg="encountered an error cleaning up failed sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.886697 containerd[1697]: time="2025-01-14T14:36:18.886157721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jckhs,Uid:09a9e4f5-6d4c-44f9-814c-0e031fb006c1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.886822 kubelet[3240]: E0114 14:36:18.886379 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.886822 kubelet[3240]: E0114 14:36:18.886427 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:18.886822 kubelet[3240]: E0114 14:36:18.886450 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jckhs" Jan 14 14:36:18.887001 kubelet[3240]: E0114 14:36:18.886507 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jckhs_calico-system(09a9e4f5-6d4c-44f9-814c-0e031fb006c1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jckhs_calico-system(09a9e4f5-6d4c-44f9-814c-0e031fb006c1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1" Jan 14 14:36:18.888841 containerd[1697]: time="2025-01-14T14:36:18.888811601Z" level=error msg="Failed to destroy network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.889136 containerd[1697]: time="2025-01-14T14:36:18.889107410Z" level=error msg="encountered an error cleaning up failed sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.889220 containerd[1697]: time="2025-01-14T14:36:18.889166312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-fpj2w,Uid:82a33404-98b2-48ea-9d78-61b8e4c56093,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.889407 kubelet[3240]: E0114 14:36:18.889370 3240 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:18.889560 kubelet[3240]: E0114 14:36:18.889427 3240 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77645db595-fpj2w" Jan 14 14:36:18.889560 kubelet[3240]: E0114 14:36:18.889455 3240 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-77645db595-fpj2w" Jan 14 14:36:18.889560 kubelet[3240]: E0114 14:36:18.889543 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-77645db595-fpj2w_calico-apiserver(82a33404-98b2-48ea-9d78-61b8e4c56093)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-77645db595-fpj2w_calico-apiserver(82a33404-98b2-48ea-9d78-61b8e4c56093)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77645db595-fpj2w" podUID="82a33404-98b2-48ea-9d78-61b8e4c56093" Jan 14 14:36:19.139986 kubelet[3240]: I0114 14:36:19.139940 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:19.141612 containerd[1697]: time="2025-01-14T14:36:19.141069164Z" level=info msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" Jan 14 14:36:19.141612 containerd[1697]: time="2025-01-14T14:36:19.141280571Z" level=info msg="Ensure that sandbox 65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5 in task-service has been cleanup successfully" Jan 14 14:36:19.146627 kubelet[3240]: I0114 14:36:19.146602 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:19.147806 containerd[1697]: time="2025-01-14T14:36:19.147459256Z" level=info msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" Jan 14 14:36:19.147806 containerd[1697]: time="2025-01-14T14:36:19.147660762Z" level=info msg="Ensure that sandbox 4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc in task-service has been cleanup successfully" Jan 14 14:36:19.150546 containerd[1697]: time="2025-01-14T14:36:19.150409344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 14 14:36:19.154637 kubelet[3240]: I0114 14:36:19.154519 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:19.155413 containerd[1697]: time="2025-01-14T14:36:19.154981981Z" level=info msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" Jan 14 14:36:19.155413 containerd[1697]: time="2025-01-14T14:36:19.155182387Z" level=info msg="Ensure that sandbox 4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7 in task-service has been cleanup successfully" Jan 14 14:36:19.167751 kubelet[3240]: I0114 14:36:19.167728 
3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:19.168600 containerd[1697]: time="2025-01-14T14:36:19.168533188Z" level=info msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" Jan 14 14:36:19.168765 containerd[1697]: time="2025-01-14T14:36:19.168734994Z" level=info msg="Ensure that sandbox 9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff in task-service has been cleanup successfully" Jan 14 14:36:19.175354 kubelet[3240]: I0114 14:36:19.175327 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:19.176580 containerd[1697]: time="2025-01-14T14:36:19.176005512Z" level=info msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" Jan 14 14:36:19.176580 containerd[1697]: time="2025-01-14T14:36:19.176250919Z" level=info msg="Ensure that sandbox 4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f in task-service has been cleanup successfully" Jan 14 14:36:19.210490 kubelet[3240]: I0114 14:36:19.209223 3240 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:19.212929 containerd[1697]: time="2025-01-14T14:36:19.211622580Z" level=info msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" Jan 14 14:36:19.214483 containerd[1697]: time="2025-01-14T14:36:19.214441764Z" level=info msg="Ensure that sandbox 3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc in task-service has been cleanup successfully" Jan 14 14:36:19.289000 containerd[1697]: time="2025-01-14T14:36:19.288928697Z" level=error msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" failed" error="failed to destroy network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.289723 kubelet[3240]: E0114 14:36:19.289674 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:19.290009 kubelet[3240]: E0114 14:36:19.289947 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5"} Jan 14 14:36:19.291170 kubelet[3240]: E0114 14:36:19.291081 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.291170 kubelet[3240]: E0114 14:36:19.291129 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-gs5qf" podUID="e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a" Jan 14 14:36:19.292749 containerd[1697]: time="2025-01-14T14:36:19.292697010Z" level=error msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" failed" error="failed to destroy network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.293192 kubelet[3240]: E0114 14:36:19.293049 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:19.293192 kubelet[3240]: E0114 14:36:19.293089 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc"} Jan 14 14:36:19.293192 kubelet[3240]: E0114 14:36:19.293125 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c33bf41e-f146-4bd5-b602-c1e913049366\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.293192 kubelet[3240]: E0114 14:36:19.293152 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c33bf41e-f146-4bd5-b602-c1e913049366\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-tphr5" podUID="c33bf41e-f146-4bd5-b602-c1e913049366" Jan 14 14:36:19.298585 containerd[1697]: time="2025-01-14T14:36:19.298549086Z" level=error msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" failed" error="failed to destroy network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.298788 kubelet[3240]: E0114 14:36:19.298753 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:19.298874 kubelet[3240]: E0114 14:36:19.298799 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f"} Jan 14 14:36:19.298874 kubelet[3240]: E0114 14:36:19.298840 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e8634fa1-fd6c-4670-aa78-0572c049583e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.298992 kubelet[3240]: E0114 14:36:19.298866 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e8634fa1-fd6c-4670-aa78-0572c049583e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77645db595-cvbcq" podUID="e8634fa1-fd6c-4670-aa78-0572c049583e" Jan 14 14:36:19.306638 containerd[1697]: time="2025-01-14T14:36:19.306591227Z" level=error msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" failed" error="failed to destroy network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.306920 kubelet[3240]: E0114 14:36:19.306785 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:19.306920 kubelet[3240]: E0114 14:36:19.306835 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7"} Jan 14 14:36:19.306920 kubelet[3240]: E0114 14:36:19.306869 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.306920 kubelet[3240]: E0114 14:36:19.306894 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd" podUID="f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e" Jan 14 14:36:19.311693 containerd[1697]: time="2025-01-14T14:36:19.311658179Z" level=error msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" failed" error="failed to destroy network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.311892 kubelet[3240]: E0114 14:36:19.311800 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:19.311892 kubelet[3240]: E0114 14:36:19.311835 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff"} Jan 14 14:36:19.311892 kubelet[3240]: E0114 14:36:19.311873 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.312086 kubelet[3240]: E0114 14:36:19.311900 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"09a9e4f5-6d4c-44f9-814c-0e031fb006c1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jckhs" podUID="09a9e4f5-6d4c-44f9-814c-0e031fb006c1" Jan 14 14:36:19.312492 
containerd[1697]: time="2025-01-14T14:36:19.312439602Z" level=error msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" failed" error="failed to destroy network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 14:36:19.312646 kubelet[3240]: E0114 14:36:19.312616 3240 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:19.312728 kubelet[3240]: E0114 14:36:19.312668 3240 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc"} Jan 14 14:36:19.312728 kubelet[3240]: E0114 14:36:19.312705 3240 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"82a33404-98b2-48ea-9d78-61b8e4c56093\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 14 14:36:19.312844 kubelet[3240]: E0114 14:36:19.312732 3240 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"82a33404-98b2-48ea-9d78-61b8e4c56093\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-77645db595-fpj2w" podUID="82a33404-98b2-48ea-9d78-61b8e4c56093" Jan 14 14:36:19.600945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff-shm.mount: Deactivated successfully. Jan 14 14:36:19.601078 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f-shm.mount: Deactivated successfully. Jan 14 14:36:19.601161 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc-shm.mount: Deactivated successfully. Jan 14 14:36:19.601233 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc-shm.mount: Deactivated successfully. Jan 14 14:36:19.601310 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7-shm.mount: Deactivated successfully. 
Jan 14 14:36:19.601380 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5-shm.mount: Deactivated successfully. Jan 14 14:36:27.867360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount86033632.mount: Deactivated successfully. Jan 14 14:36:27.914012 containerd[1697]: time="2025-01-14T14:36:27.913957892Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:27.917205 containerd[1697]: time="2025-01-14T14:36:27.917153380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 14 14:36:27.922845 containerd[1697]: time="2025-01-14T14:36:27.922698933Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:27.928447 containerd[1697]: time="2025-01-14T14:36:27.928416691Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:27.929246 containerd[1697]: time="2025-01-14T14:36:27.929069109Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.778584962s" Jan 14 14:36:27.929246 containerd[1697]: time="2025-01-14T14:36:27.929116610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 14 14:36:27.947187 containerd[1697]: time="2025-01-14T14:36:27.947143206Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 14:36:28.010760 containerd[1697]: time="2025-01-14T14:36:28.010708057Z" level=info msg="CreateContainer within sandbox \"4fd76163ed7a72e8e76430b940982e40ea6392959d9c311ffad75bcb08b98ad4\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b\"" Jan 14 14:36:28.011381 containerd[1697]: time="2025-01-14T14:36:28.011169470Z" level=info msg="StartContainer for \"708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b\"" Jan 14 14:36:28.041638 systemd[1]: Started cri-containerd-708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b.scope - libcontainer container 708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b. Jan 14 14:36:28.072695 containerd[1697]: time="2025-01-14T14:36:28.072527160Z" level=info msg="StartContainer for \"708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b\" returns successfully" Jan 14 14:36:28.180069 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 14:36:28.180251 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
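For scale: the calico/node pull above moved 142741872 bytes in 8.778584962 s, roughly 16.3 MB/s (about 15.5 MiB/s). The arithmetic, using only the figures from the log:

```go
package main

import "fmt"

func main() {
	// Figures taken from the "Pulled image" line above.
	const bytes = 142741872.0   // reported image size in bytes
	const seconds = 8.778584962 // reported pull duration
	fmt.Printf("%.1f MB/s (%.1f MiB/s)\n", bytes/seconds/1e6, bytes/seconds/(1<<20))
}
```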
Jan 14 14:36:29.837515 kernel: bpftool[4504]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 14 14:36:30.075416 systemd-networkd[1575]: vxlan.calico: Link UP Jan 14 14:36:30.075427 systemd-networkd[1575]: vxlan.calico: Gained carrier Jan 14 14:36:31.007929 containerd[1697]: time="2025-01-14T14:36:31.007423088Z" level=info msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" Jan 14 14:36:31.057688 kubelet[3240]: I0114 14:36:31.056777 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mf8qg" podStartSLOduration=3.6829822869999997 podStartE2EDuration="31.056750038s" podCreationTimestamp="2025-01-14 14:36:00 +0000 UTC" firstStartedPulling="2025-01-14 14:36:00.556342286 +0000 UTC m=+24.650739051" lastFinishedPulling="2025-01-14 14:36:27.930109937 +0000 UTC m=+52.024506802" observedRunningTime="2025-01-14 14:36:28.270438412 +0000 UTC m=+52.364835177" watchObservedRunningTime="2025-01-14 14:36:31.056750038 +0000 UTC m=+55.151146903" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.054 [INFO][4588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.055 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" iface="eth0" netns="/var/run/netns/cni-a766261e-1046-014e-4ac0-552d2833602a" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.055 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" iface="eth0" netns="/var/run/netns/cni-a766261e-1046-014e-4ac0-552d2833602a" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.056 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" iface="eth0" netns="/var/run/netns/cni-a766261e-1046-014e-4ac0-552d2833602a" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.056 [INFO][4588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.056 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.077 [INFO][4594] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.077 [INFO][4594] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.077 [INFO][4594] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.083 [WARNING][4594] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.083 [INFO][4594] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.084 [INFO][4594] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:31.087823 containerd[1697]: 2025-01-14 14:36:31.086 [INFO][4588] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:31.090231 containerd[1697]: time="2025-01-14T14:36:31.089555202Z" level=info msg="TearDown network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" successfully" Jan 14 14:36:31.090231 containerd[1697]: time="2025-01-14T14:36:31.089605204Z" level=info msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" returns successfully" Jan 14 14:36:31.090539 containerd[1697]: time="2025-01-14T14:36:31.090458829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-cvbcq,Uid:e8634fa1-fd6c-4670-aa78-0572c049583e,Namespace:calico-apiserver,Attempt:1,}" Jan 14 14:36:31.093010 systemd[1]: run-netns-cni\x2da766261e\x2d1046\x2d014e\x2d4ac0\x2d552d2833602a.mount: Deactivated successfully. 
Jan 14 14:36:31.234729 systemd-networkd[1575]: vxlan.calico: Gained IPv6LL Jan 14 14:36:31.276864 systemd-networkd[1575]: cali54c3736ff01: Link UP Jan 14 14:36:31.278238 systemd-networkd[1575]: cali54c3736ff01: Gained carrier Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.184 [INFO][4601] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0 calico-apiserver-77645db595- calico-apiserver e8634fa1-fd6c-4670-aa78-0572c049583e 812 0 2025-01-14 14:36:00 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77645db595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa calico-apiserver-77645db595-cvbcq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali54c3736ff01 [] []}} ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.185 [INFO][4601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.224 [INFO][4611] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" HandleID="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.236 [INFO][4611] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" HandleID="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"calico-apiserver-77645db595-cvbcq", "timestamp":"2025-01-14 14:36:31.224253862 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.236 [INFO][4611] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.236 [INFO][4611] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.236 [INFO][4611] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.238 [INFO][4611] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.243 [INFO][4611] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.249 [INFO][4611] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.251 [INFO][4611] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.253 [INFO][4611] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.253 [INFO][4611] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.254 [INFO][4611] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.260 [INFO][4611] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.267 [INFO][4611] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.65/26] block=192.168.34.64/26 handle="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.268 [INFO][4611] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.65/26] handle="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.268 [INFO][4611] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
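The ADD path above walks Calico's block-affinity IPAM: look up the host's affinities, confirm the affine block 192.168.34.64/26, and claim the first free address in it, 192.168.34.65. The block bounds, checked with the standard library (the CIDR and IP come from the log; everything else is illustrative):

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.34.64/26") // block affinity from the log
	first := netip.MustParseAddr("192.168.34.65")      // IP claimed for calico-apiserver-77645db595-cvbcq
	// A /26 holds 64 addresses: 192.168.34.64 through 192.168.34.127.
	fmt.Println(block.Contains(first))                           // true
	fmt.Println(block.Masked().Addr())                           // 192.168.34.64
	fmt.Printf("capacity: %d addresses\n", 1<<(32-block.Bits())) // 64
}
```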
Jan 14 14:36:31.304019 containerd[1697]: 2025-01-14 14:36:31.268 [INFO][4611] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.65/26] IPv6=[] ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" HandleID="k8s-pod-network.118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.270 [INFO][4601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8634fa1-fd6c-4670-aa78-0572c049583e", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"calico-apiserver-77645db595-cvbcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54c3736ff01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.271 [INFO][4601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.65/32] ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.271 [INFO][4601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali54c3736ff01 ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.279 [INFO][4601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.280 [INFO][4601] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8634fa1-fd6c-4670-aa78-0572c049583e", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d", Pod:"calico-apiserver-77645db595-cvbcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54c3736ff01", MAC:"6e:aa:3c:b9:17:6e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:31.305085 containerd[1697]: 2025-01-14 14:36:31.299 [INFO][4601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-cvbcq" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:31.335882 containerd[1697]: time="2025-01-14T14:36:31.335094820Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:31.335882 containerd[1697]: time="2025-01-14T14:36:31.335824341Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:31.336195 containerd[1697]: time="2025-01-14T14:36:31.335845242Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:31.336195 containerd[1697]: time="2025-01-14T14:36:31.336010647Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:31.365639 systemd[1]: Started cri-containerd-118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d.scope - libcontainer container 118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d. 
Jan 14 14:36:31.411897 containerd[1697]: time="2025-01-14T14:36:31.411847476Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-cvbcq,Uid:e8634fa1-fd6c-4670-aa78-0572c049583e,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d\"" Jan 14 14:36:31.413974 containerd[1697]: time="2025-01-14T14:36:31.413716531Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 14 14:36:32.009409 containerd[1697]: time="2025-01-14T14:36:32.008191204Z" level=info msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" Jan 14 14:36:32.011099 containerd[1697]: time="2025-01-14T14:36:32.010612876Z" level=info msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" Jan 14 14:36:32.017075 containerd[1697]: time="2025-01-14T14:36:32.016673854Z" level=info msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.155 [INFO][4713] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.155 [INFO][4713] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" iface="eth0" netns="/var/run/netns/cni-5aabc846-64d6-7a98-6e39-2d200e1728cb" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.156 [INFO][4713] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" iface="eth0" netns="/var/run/netns/cni-5aabc846-64d6-7a98-6e39-2d200e1728cb" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.159 [INFO][4713] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" iface="eth0" netns="/var/run/netns/cni-5aabc846-64d6-7a98-6e39-2d200e1728cb" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.160 [INFO][4713] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.160 [INFO][4713] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.199 [INFO][4736] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.200 [INFO][4736] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.200 [INFO][4736] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.212 [WARNING][4736] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.212 [INFO][4736] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.215 [INFO][4736] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:32.221748 containerd[1697]: 2025-01-14 14:36:32.218 [INFO][4713] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:32.222398 containerd[1697]: time="2025-01-14T14:36:32.222162094Z" level=info msg="TearDown network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" successfully" Jan 14 14:36:32.222398 containerd[1697]: time="2025-01-14T14:36:32.222203495Z" level=info msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" returns successfully" Jan 14 14:36:32.227896 systemd[1]: run-netns-cni\x2d5aabc846\x2d64d6\x2d7a98\x2d6e39\x2d2d200e1728cb.mount: Deactivated successfully. Jan 14 14:36:32.235088 containerd[1697]: time="2025-01-14T14:36:32.233835037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jckhs,Uid:09a9e4f5-6d4c-44f9-814c-0e031fb006c1,Namespace:calico-system,Attempt:1,}" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.150 [INFO][4714] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.152 [INFO][4714] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" iface="eth0" netns="/var/run/netns/cni-b5f92c86-70af-0815-f23a-73e35fd52060" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.152 [INFO][4714] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" iface="eth0" netns="/var/run/netns/cni-b5f92c86-70af-0815-f23a-73e35fd52060" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.152 [INFO][4714] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" iface="eth0" netns="/var/run/netns/cni-b5f92c86-70af-0815-f23a-73e35fd52060" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.152 [INFO][4714] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.152 [INFO][4714] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.229 [INFO][4732] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.229 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.229 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.245 [WARNING][4732] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.245 [INFO][4732] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.247 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:32.250274 containerd[1697]: 2025-01-14 14:36:32.248 [INFO][4714] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:32.251258 containerd[1697]: time="2025-01-14T14:36:32.251060843Z" level=info msg="TearDown network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" successfully" Jan 14 14:36:32.251258 containerd[1697]: time="2025-01-14T14:36:32.251105745Z" level=info msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" returns successfully" Jan 14 14:36:32.255963 containerd[1697]: time="2025-01-14T14:36:32.254764452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tphr5,Uid:c33bf41e-f146-4bd5-b602-c1e913049366,Namespace:kube-system,Attempt:1,}" Jan 14 14:36:32.256869 systemd[1]: run-netns-cni\x2db5f92c86\x2d70af\x2d0815\x2df23a\x2d73e35fd52060.mount: Deactivated successfully. Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.146 [INFO][4715] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.146 [INFO][4715] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" iface="eth0" netns="/var/run/netns/cni-112b2f26-475c-ea31-65a0-6f282c35e1e7" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.147 [INFO][4715] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" iface="eth0" netns="/var/run/netns/cni-112b2f26-475c-ea31-65a0-6f282c35e1e7" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.147 [INFO][4715] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" iface="eth0" netns="/var/run/netns/cni-112b2f26-475c-ea31-65a0-6f282c35e1e7" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.147 [INFO][4715] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.147 [INFO][4715] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.238 [INFO][4731] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.238 [INFO][4731] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.247 [INFO][4731] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.261 [WARNING][4731] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.261 [INFO][4731] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.263 [INFO][4731] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:32.265820 containerd[1697]: 2025-01-14 14:36:32.264 [INFO][4715] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:32.268391 containerd[1697]: time="2025-01-14T14:36:32.267552628Z" level=info msg="TearDown network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" successfully" Jan 14 14:36:32.268391 containerd[1697]: time="2025-01-14T14:36:32.267610530Z" level=info msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" returns successfully" Jan 14 14:36:32.269143 containerd[1697]: time="2025-01-14T14:36:32.268866367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-fpj2w,Uid:82a33404-98b2-48ea-9d78-61b8e4c56093,Namespace:calico-apiserver,Attempt:1,}" Jan 14 14:36:32.270638 systemd[1]: run-netns-cni\x2d112b2f26\x2d475c\x2dea31\x2d65a0\x2d6f282c35e1e7.mount: Deactivated successfully. Jan 14 14:36:32.386460 systemd-networkd[1575]: cali54c3736ff01: Gained IPv6LL Jan 14 14:36:32.531388 systemd-networkd[1575]: caliba7442e9fb6: Link UP Jan 14 14:36:32.533740 systemd-networkd[1575]: caliba7442e9fb6: Gained carrier Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.406 [INFO][4749] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0 csi-node-driver- calico-system 09a9e4f5-6d4c-44f9-814c-0e031fb006c1 825 0 2025-01-14 14:36:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa csi-node-driver-jckhs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliba7442e9fb6 [] []}} ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.406 [INFO][4749] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.438 [INFO][4760] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" HandleID="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.455 [INFO][4760] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" HandleID="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2b10), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"csi-node-driver-jckhs", "timestamp":"2025-01-14 14:36:32.438030439 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.456 [INFO][4760] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.456 [INFO][4760] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.456 [INFO][4760] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.459 [INFO][4760] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.467 [INFO][4760] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.482 [INFO][4760] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.485 [INFO][4760] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.491 [INFO][4760] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.491 [INFO][4760] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.496 [INFO][4760] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.503 [INFO][4760] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.520 [INFO][4760] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.66/26] block=192.168.34.64/26 handle="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.522 [INFO][4760] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.66/26] handle="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.522 [INFO][4760] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
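Same block, next ordinal: with the /26 affinity already held by the host, csi-node-driver-jckhs is handed 192.168.34.66, the next free slot after .65. A toy allocator showing the "scan the affine block for the first free address" behavior these ipam.go lines trace (illustrative only; the real allocator also handles reservations and datastore conflicts):

```go
package main

import (
	"fmt"
	"net/netip"
)

// nextFree scans an affine block for the first unallocated address, the
// behavior traced by the ipam.go lines above. Toy version, not Calico's code.
func nextFree(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
	for a := block.Masked().Addr(); block.Contains(a); a = a.Next() {
		if !used[a] {
			used[a] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	block := netip.MustParsePrefix("192.168.34.64/26")
	used := map[netip.Addr]bool{
		netip.MustParseAddr("192.168.34.64"): true, // network address, skipped by convention here
		netip.MustParseAddr("192.168.34.65"): true, // calico-apiserver-77645db595-cvbcq
	}
	a, _ := nextFree(block, used)
	fmt.Println(a) // 192.168.34.66, matching csi-node-driver-jckhs
}
```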
Jan 14 14:36:32.573310 containerd[1697]: 2025-01-14 14:36:32.522 [INFO][4760] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.66/26] IPv6=[] ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" HandleID="k8s-pod-network.0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.525 [INFO][4749] cni-plugin/k8s.go 386: Populated endpoint ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09a9e4f5-6d4c-44f9-814c-0e031fb006c1", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"csi-node-driver-jckhs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba7442e9fb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.525 [INFO][4749] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.66/32] ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.526 [INFO][4749] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliba7442e9fb6 ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.535 [INFO][4749] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.536 [INFO][4749] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09a9e4f5-6d4c-44f9-814c-0e031fb006c1", ResourceVersion:"825", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c", Pod:"csi-node-driver-jckhs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba7442e9fb6", MAC:"ce:98:0a:5a:82:5e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.576371 containerd[1697]: 2025-01-14 14:36:32.560 [INFO][4749] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c" Namespace="calico-system" Pod="csi-node-driver-jckhs" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:32.630172 containerd[1697]: time="2025-01-14T14:36:32.629741874Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:32.630172 containerd[1697]: time="2025-01-14T14:36:32.629807876Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:32.630172 containerd[1697]: time="2025-01-14T14:36:32.629840677Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.630172 containerd[1697]: time="2025-01-14T14:36:32.629974281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.661646 systemd[1]: Started cri-containerd-0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c.scope - libcontainer container 0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c. 
Jan 14 14:36:32.681587 systemd-networkd[1575]: cali5a4aaf61539: Link UP Jan 14 14:36:32.681846 systemd-networkd[1575]: cali5a4aaf61539: Gained carrier Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.506 [INFO][4765] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0 coredns-7db6d8ff4d- kube-system c33bf41e-f146-4bd5-b602-c1e913049366 824 0 2025-01-14 14:35:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa coredns-7db6d8ff4d-tphr5 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5a4aaf61539 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.507 [INFO][4765] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.593 [INFO][4790] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" HandleID="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.609 [INFO][4790] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" HandleID="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034faf0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"coredns-7db6d8ff4d-tphr5", "timestamp":"2025-01-14 14:36:32.593146798 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.610 [INFO][4790] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.610 [INFO][4790] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.611 [INFO][4790] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.614 [INFO][4790] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.621 [INFO][4790] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.628 [INFO][4790] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.631 [INFO][4790] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.634 [INFO][4790] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.634 [INFO][4790] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.636 [INFO][4790] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.646 [INFO][4790] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.661 [INFO][4790] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.67/26] block=192.168.34.64/26 handle="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.661 [INFO][4790] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.67/26] handle="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.661 [INFO][4790] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 14 14:36:32.721021 containerd[1697]: 2025-01-14 14:36:32.661 [INFO][4790] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.67/26] IPv6=[] ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" HandleID="k8s-pod-network.1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.673 [INFO][4765] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c33bf41e-f146-4bd5-b602-c1e913049366", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"coredns-7db6d8ff4d-tphr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a4aaf61539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.673 [INFO][4765] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.67/32] ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.673 [INFO][4765] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5a4aaf61539 ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.681 [INFO][4765] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" 
WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.681 [INFO][4765] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c33bf41e-f146-4bd5-b602-c1e913049366", ResourceVersion:"824", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e", Pod:"coredns-7db6d8ff4d-tphr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a4aaf61539", MAC:"06:be:0d:cc:88:c9", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.723080 containerd[1697]: 2025-01-14 14:36:32.708 [INFO][4765] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e" Namespace="kube-system" Pod="coredns-7db6d8ff4d-tphr5" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:32.726503 containerd[1697]: time="2025-01-14T14:36:32.726251911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jckhs,Uid:09a9e4f5-6d4c-44f9-814c-0e031fb006c1,Namespace:calico-system,Attempt:1,} returns sandbox id \"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c\"" Jan 14 14:36:32.750342 systemd-networkd[1575]: calic423024cb94: Link UP Jan 14 14:36:32.752922 systemd-networkd[1575]: calic423024cb94: Gained carrier Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.529 [INFO][4770] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0 calico-apiserver-77645db595- calico-apiserver 82a33404-98b2-48ea-9d78-61b8e4c56093 823 0 2025-01-14 14:36:00 +0000 UTC map[apiserver:true 
app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:77645db595 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa calico-apiserver-77645db595-fpj2w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic423024cb94 [] []}} ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.533 [INFO][4770] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.633 [INFO][4798] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" HandleID="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.662 [INFO][4798] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" HandleID="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00038d090), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"calico-apiserver-77645db595-fpj2w", "timestamp":"2025-01-14 14:36:32.633223376 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.662 [INFO][4798] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.662 [INFO][4798] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
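The assignArgs dump above shows the exact request the CNI IPAM plugin builds for the apiserver pod. The stand-in struct below mirrors only the fields visible in that dump (it is not Calico's real ipam.AutoAssignArgs type, whose import path and full field set vary by Calico version):

    package main

    import "fmt"

    // autoAssignArgs mirrors the fields printed in the ipam.AutoAssignArgs
    // dump above; a local stand-in for illustration only.
    type autoAssignArgs struct {
        Num4, Num6  int
        HandleID    *string
        Attrs       map[string]string
        Hostname    string
        IntendedUse string
    }

    func main() {
        handle := "k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6"
        args := autoAssignArgs{
            Num4:     1, // one IPv4 per pod; no IPv6 pools in this cluster
            Num6:     0,
            HandleID: &handle, // teardown later releases by this same handle
            Attrs: map[string]string{ // lets an allocation be traced back to its pod
                "namespace": "calico-apiserver",
                "node":      "ci-4081.3.0-a-0bb245c6fa",
                "pod":       "calico-apiserver-77645db595-fpj2w",
            },
            Hostname:    "ci-4081.3.0-a-0bb245c6fa",
            IntendedUse: "Workload",
        }
        fmt.Printf("%+v\n", args)
    }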
Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.663 [INFO][4798] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.665 [INFO][4798] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.672 [INFO][4798] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.686 [INFO][4798] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.692 [INFO][4798] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.707 [INFO][4798] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.708 [INFO][4798] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.712 [INFO][4798] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6 Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.726 [INFO][4798] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.743 [INFO][4798] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.68/26] block=192.168.34.64/26 handle="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.743 [INFO][4798] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.68/26] handle="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.743 [INFO][4798] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
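A quick sanity check of the claim just logged: the /32 handed to the pod must fall inside the host's affine /26 block, which spans 64 addresses. A short verification with the standard library:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.34.64/26")
        ip := netip.MustParseAddr("192.168.34.68") // claimed for apiserver-fpj2w above

        fmt.Println(block.Contains(ip)) // true

        // A /26 spans 2^(32-26) = 64 addresses: .64 through .127.
        size := 1 << (32 - block.Bits())
        last := block.Addr()
        for i := 0; i < size-1; i++ {
            last = last.Next()
        }
        fmt.Println(size, block.Addr(), last) // 64 192.168.34.64 192.168.34.127
    }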
Jan 14 14:36:32.797498 containerd[1697]: 2025-01-14 14:36:32.743 [INFO][4798] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.68/26] IPv6=[] ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" HandleID="k8s-pod-network.1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.746 [INFO][4770] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"82a33404-98b2-48ea-9d78-61b8e4c56093", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"calico-apiserver-77645db595-fpj2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic423024cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.746 [INFO][4770] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.68/32] ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.746 [INFO][4770] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic423024cb94 ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.753 [INFO][4770] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.754 [INFO][4770] cni-plugin/k8s.go 414: Added Mac, 
interface name, and active container ID to endpoint ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"82a33404-98b2-48ea-9d78-61b8e4c56093", ResourceVersion:"823", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6", Pod:"calico-apiserver-77645db595-fpj2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic423024cb94", MAC:"4e:c3:f1:00:b1:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:32.798950 containerd[1697]: 2025-01-14 14:36:32.781 [INFO][4770] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6" Namespace="calico-apiserver" Pod="calico-apiserver-77645db595-fpj2w" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:32.809321 containerd[1697]: time="2025-01-14T14:36:32.807686104Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:32.809321 containerd[1697]: time="2025-01-14T14:36:32.807761707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:32.809321 containerd[1697]: time="2025-01-14T14:36:32.807799708Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.809321 containerd[1697]: time="2025-01-14T14:36:32.807902411Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.853320 systemd[1]: Started cri-containerd-1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e.scope - libcontainer container 1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e. Jan 14 14:36:32.870303 containerd[1697]: time="2025-01-14T14:36:32.866548935Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:32.870303 containerd[1697]: time="2025-01-14T14:36:32.866624137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:32.870303 containerd[1697]: time="2025-01-14T14:36:32.867029049Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.870303 containerd[1697]: time="2025-01-14T14:36:32.867166953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:32.907928 systemd[1]: Started cri-containerd-1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6.scope - libcontainer container 1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6. Jan 14 14:36:32.993442 containerd[1697]: time="2025-01-14T14:36:32.993320661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-tphr5,Uid:c33bf41e-f146-4bd5-b602-c1e913049366,Namespace:kube-system,Attempt:1,} returns sandbox id \"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e\"" Jan 14 14:36:32.998841 containerd[1697]: time="2025-01-14T14:36:32.998792222Z" level=info msg="CreateContainer within sandbox \"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 14:36:33.055353 containerd[1697]: time="2025-01-14T14:36:33.053848840Z" level=info msg="CreateContainer within sandbox \"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01\"" Jan 14 14:36:33.065781 containerd[1697]: time="2025-01-14T14:36:33.065722789Z" level=info msg="StartContainer for \"9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01\"" Jan 14 14:36:33.073913 containerd[1697]: time="2025-01-14T14:36:33.073719224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-77645db595-fpj2w,Uid:82a33404-98b2-48ea-9d78-61b8e4c56093,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6\"" Jan 14 14:36:33.111705 systemd[1]: Started cri-containerd-9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01.scope - libcontainer container 9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01. 
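The repeated "loading plugin" lines come from the per-container runc v2 shim that containerd spawns when a task is created, and the "Started cri-containerd-<id>.scope" lines are systemd opening a transient scope for each container's cgroup. A hedged sketch of the same create-and-start step using containerd's 1.x Go client (socket path and the CRI "k8s.io" namespace are the conventional defaults; this mimics what kubelet triggers via CRI, not kubelet's own code path):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        // CRI-managed containers live in containerd's "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        // Load the coredns container created above by its ID.
        c, err := client.LoadContainer(ctx, "9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01")
        if err != nil {
            log.Fatal(err)
        }

        // NewTask spawns the runc v2 shim whose plugin-loading messages
        // appear in the log; Start then actually enters the container.
        task, err := c.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }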
Jan 14 14:36:33.160166 containerd[1697]: time="2025-01-14T14:36:33.160109163Z" level=info msg="StartContainer for \"9173db23bd650fe826861354e805ccca75a6ad79c13f1edf37dfe9b07fbe2e01\" returns successfully" Jan 14 14:36:33.281085 kubelet[3240]: I0114 14:36:33.280762 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-tphr5" podStartSLOduration=40.280737609 podStartE2EDuration="40.280737609s" podCreationTimestamp="2025-01-14 14:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:36:33.279810382 +0000 UTC m=+57.374207247" watchObservedRunningTime="2025-01-14 14:36:33.280737609 +0000 UTC m=+57.375134474" Jan 14 14:36:34.009970 containerd[1697]: time="2025-01-14T14:36:34.008666406Z" level=info msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" Jan 14 14:36:34.011100 containerd[1697]: time="2025-01-14T14:36:34.011058676Z" level=info msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" Jan 14 14:36:34.049695 systemd-networkd[1575]: caliba7442e9fb6: Gained IPv6LL Jan 14 14:36:34.051889 systemd-networkd[1575]: cali5a4aaf61539: Gained IPv6LL Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.125 [INFO][5033] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.125 [INFO][5033] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" iface="eth0" netns="/var/run/netns/cni-3abd94a1-dbf1-ecc1-87e7-b57fa582469a" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.126 [INFO][5033] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" iface="eth0" netns="/var/run/netns/cni-3abd94a1-dbf1-ecc1-87e7-b57fa582469a" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.128 [INFO][5033] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" iface="eth0" netns="/var/run/netns/cni-3abd94a1-dbf1-ecc1-87e7-b57fa582469a" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.128 [INFO][5033] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.128 [INFO][5033] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.171 [INFO][5046] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.171 [INFO][5046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.171 [INFO][5046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
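The kubelet pod_startup_latency_tracker line above is plain arithmetic: the pull timestamps are the zero time (the image needed no pull), so podStartSLOduration reduces to observedRunningTime minus podCreationTimestamp. Reproducing the logged 40.280737609s:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
        created, _ := time.Parse(layout, "2025-01-14 14:35:53 +0000 UTC")
        running, _ := time.Parse(layout, "2025-01-14 14:36:33.280737609 +0000 UTC")
        fmt.Println(running.Sub(created)) // 40.280737609s, the logged podStartSLOduration
    }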
Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.182 [WARNING][5046] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.182 [INFO][5046] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.185 [INFO][5046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:34.190972 containerd[1697]: 2025-01-14 14:36:34.187 [INFO][5033] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:34.197800 containerd[1697]: time="2025-01-14T14:36:34.196750034Z" level=info msg="TearDown network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" successfully" Jan 14 14:36:34.197800 containerd[1697]: time="2025-01-14T14:36:34.196791335Z" level=info msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" returns successfully" Jan 14 14:36:34.196943 systemd[1]: run-netns-cni\x2d3abd94a1\x2ddbf1\x2decc1\x2d87e7\x2db57fa582469a.mount: Deactivated successfully. Jan 14 14:36:34.198988 containerd[1697]: time="2025-01-14T14:36:34.198957499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gs5qf,Uid:e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a,Namespace:kube-system,Attempt:1,}" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.129 [INFO][5034] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.130 [INFO][5034] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" iface="eth0" netns="/var/run/netns/cni-a474eb52-8e5c-2b83-2338-762b68da9cf3" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.130 [INFO][5034] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" iface="eth0" netns="/var/run/netns/cni-a474eb52-8e5c-2b83-2338-762b68da9cf3" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.133 [INFO][5034] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" iface="eth0" netns="/var/run/netns/cni-a474eb52-8e5c-2b83-2338-762b68da9cf3" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.133 [INFO][5034] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.133 [INFO][5034] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.198 [INFO][5047] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.201 [INFO][5047] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.201 [INFO][5047] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.211 [WARNING][5047] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.211 [INFO][5047] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.214 [INFO][5047] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:34.219320 containerd[1697]: 2025-01-14 14:36:34.217 [INFO][5034] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:34.221753 containerd[1697]: time="2025-01-14T14:36:34.221414659Z" level=info msg="TearDown network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" successfully" Jan 14 14:36:34.221753 containerd[1697]: time="2025-01-14T14:36:34.221747269Z" level=info msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" returns successfully" Jan 14 14:36:34.224007 containerd[1697]: time="2025-01-14T14:36:34.223910533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb44f7d4c-kl6dd,Uid:f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e,Namespace:calico-system,Attempt:1,}" Jan 14 14:36:34.225405 systemd[1]: run-netns-cni\x2da474eb52\x2d8e5c\x2d2b83\x2d2338\x2d762b68da9cf3.mount: Deactivated successfully. 
Jan 14 14:36:34.600393 systemd-networkd[1575]: calid750418ebec: Link UP Jan 14 14:36:34.602641 systemd-networkd[1575]: calid750418ebec: Gained carrier Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.441 [INFO][5059] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0 coredns-7db6d8ff4d- kube-system e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a 853 0 2025-01-14 14:35:53 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa coredns-7db6d8ff4d-gs5qf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid750418ebec [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.441 [INFO][5059] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.520 [INFO][5083] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" HandleID="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.543 [INFO][5083] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" HandleID="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00036d130), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"coredns-7db6d8ff4d-gs5qf", "timestamp":"2025-01-14 14:36:34.52049545 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.543 [INFO][5083] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.543 [INFO][5083] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
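The endpoint summary above lists the coredns ports in decimal ({dns UDP 53}, {dns-tcp TCP 53}, {metrics TCP 9153}); the Go-syntax WorkloadEndpoint dumps elsewhere in this log print the same ports in hex, which is why Port:0x35 and Port:0x23c1 appear. The conversion:

    package main

    import "fmt"

    func main() {
        // Port:0x35 is DNS, Port:0x23c1 is CoreDNS's Prometheus metrics port.
        fmt.Println(0x35, 0x23c1) // 53 9153
    }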
Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.544 [INFO][5083] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.551 [INFO][5083] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.558 [INFO][5083] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.568 [INFO][5083] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.571 [INFO][5083] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.574 [INFO][5083] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.574 [INFO][5083] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.575 [INFO][5083] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.581 [INFO][5083] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5083] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.69/26] block=192.168.34.64/26 handle="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5083] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.69/26] handle="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5083] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
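Every claim in this log is keyed by a handle of the form "k8s-pod-network.<sandbox container ID>", and teardown releases by the same key, which is why the StopPodSandbox paths above log "Releasing address using handleID". A toy registry showing the claim/release-by-handle pattern (Calico keeps this association in its datastore, not in process memory):

    package main

    import "fmt"

    // handles maps an allocation handle to the IPs claimed under it.
    var handles = map[string][]string{}

    func claim(handle, ip string) { handles[handle] = append(handles[handle], ip) }

    func release(handle string) []string {
        ips := handles[handle]
        delete(handles, handle)
        return ips
    }

    func main() {
        h := "k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d"
        claim(h, "192.168.34.69/26")
        fmt.Println(release(h)) // [192.168.34.69/26]: teardown needs only the handle
    }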
Jan 14 14:36:34.634574 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5083] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.69/26] IPv6=[] ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" HandleID="k8s-pod-network.c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.595 [INFO][5059] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"coredns-7db6d8ff4d-gs5qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid750418ebec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.595 [INFO][5059] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.69/32] ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.595 [INFO][5059] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid750418ebec ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.601 [INFO][5059] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" 
WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.604 [INFO][5059] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a", ResourceVersion:"853", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d", Pod:"coredns-7db6d8ff4d-gs5qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid750418ebec", MAC:"9a:c1:0b:12:ce:a3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:34.635568 containerd[1697]: 2025-01-14 14:36:34.630 [INFO][5059] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d" Namespace="kube-system" Pod="coredns-7db6d8ff4d-gs5qf" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:34.666123 containerd[1697]: time="2025-01-14T14:36:34.664796292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:34.671233 containerd[1697]: time="2025-01-14T14:36:34.670492459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 14 14:36:34.675005 containerd[1697]: time="2025-01-14T14:36:34.674919289Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:34.691313 systemd-networkd[1575]: calia16e8aa82d8: Link UP Jan 14 14:36:34.692935 systemd-networkd[1575]: calia16e8aa82d8: Gained carrier Jan 14 14:36:34.705012 containerd[1697]: 
time="2025-01-14T14:36:34.704275452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:34.706097 containerd[1697]: time="2025-01-14T14:36:34.705568190Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:34.707123 containerd[1697]: time="2025-01-14T14:36:34.707057534Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.293299402s" Jan 14 14:36:34.707123 containerd[1697]: time="2025-01-14T14:36:34.707106735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 14 14:36:34.708047 containerd[1697]: time="2025-01-14T14:36:34.707779355Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:34.708047 containerd[1697]: time="2025-01-14T14:36:34.707807056Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:34.708047 containerd[1697]: time="2025-01-14T14:36:34.707901959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:34.717891 containerd[1697]: time="2025-01-14T14:36:34.717834151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 14 14:36:34.726608 containerd[1697]: time="2025-01-14T14:36:34.725973590Z" level=info msg="CreateContainer within sandbox \"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 14 14:36:34.754168 systemd-networkd[1575]: calic423024cb94: Gained IPv6LL Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.446 [INFO][5067] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0 calico-kube-controllers-7bb44f7d4c- calico-system f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e 854 0 2025-01-14 14:36:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7bb44f7d4c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.3.0-a-0bb245c6fa calico-kube-controllers-7bb44f7d4c-kl6dd eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calia16e8aa82d8 [] []}} ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.447 [INFO][5067] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.558 [INFO][5087] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" HandleID="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.571 [INFO][5087] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" HandleID="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011ad30), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.3.0-a-0bb245c6fa", "pod":"calico-kube-controllers-7bb44f7d4c-kl6dd", "timestamp":"2025-01-14 14:36:34.558642972 +0000 UTC"}, Hostname:"ci-4081.3.0-a-0bb245c6fa", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.571 [INFO][5087] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5087] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.592 [INFO][5087] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.3.0-a-0bb245c6fa' Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.600 [INFO][5087] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.611 [INFO][5087] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.630 [INFO][5087] ipam/ipam.go 489: Trying affinity for 192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.637 [INFO][5087] ipam/ipam.go 155: Attempting to load block cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.642 [INFO][5087] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.34.64/26 host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.642 [INFO][5087] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.34.64/26 handle="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.645 [INFO][5087] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.654 [INFO][5087] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.34.64/26 handle="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.667 [INFO][5087] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.34.70/26] block=192.168.34.64/26 handle="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.667 [INFO][5087] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.34.70/26] handle="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" host="ci-4081.3.0-a-0bb245c6fa" Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.667 [INFO][5087] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
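With this claim the block has handed out .67, .68, .69, and .70 in sequence: four consecutive first-fit ordinals within the same host-affine /26. A toy calculation of an address's ordinal inside the block (the index a block-based allocator tracks; not Calico's actual code):

    package main

    import (
        "encoding/binary"
        "fmt"
        "net/netip"
    )

    // ordinal returns an address's position inside a block.
    func ordinal(block netip.Prefix, ip netip.Addr) uint32 {
        base4 := block.Addr().As4()
        ip4 := ip.As4()
        return binary.BigEndian.Uint32(ip4[:]) - binary.BigEndian.Uint32(base4[:])
    }

    func main() {
        block := netip.MustParsePrefix("192.168.34.64/26")
        for _, s := range []string{"192.168.34.67", "192.168.34.68", "192.168.34.69", "192.168.34.70"} {
            fmt.Println(s, "-> ordinal", ordinal(block, netip.MustParseAddr(s)))
        }
        // 3, 4, 5, 6: consecutive claims from the same affine block.
    }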
Jan 14 14:36:34.758499 containerd[1697]: 2025-01-14 14:36:34.667 [INFO][5087] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.34.70/26] IPv6=[] ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" HandleID="k8s-pod-network.ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.759857 containerd[1697]: 2025-01-14 14:36:34.675 [INFO][5067] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0", GenerateName:"calico-kube-controllers-7bb44f7d4c-", Namespace:"calico-system", SelfLink:"", UID:"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb44f7d4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"", Pod:"calico-kube-controllers-7bb44f7d4c-kl6dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia16e8aa82d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:34.759857 containerd[1697]: 2025-01-14 14:36:34.675 [INFO][5067] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.34.70/32] ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.759857 containerd[1697]: 2025-01-14 14:36:34.675 [INFO][5067] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia16e8aa82d8 ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.759857 containerd[1697]: 2025-01-14 14:36:34.695 [INFO][5067] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.759857 
containerd[1697]: 2025-01-14 14:36:34.703 [INFO][5067] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0", GenerateName:"calico-kube-controllers-7bb44f7d4c-", Namespace:"calico-system", SelfLink:"", UID:"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb44f7d4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f", Pod:"calico-kube-controllers-7bb44f7d4c-kl6dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia16e8aa82d8", MAC:"62:21:3c:a9:de:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:34.759857 containerd[1697]: 2025-01-14 14:36:34.728 [INFO][5067] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f" Namespace="calico-system" Pod="calico-kube-controllers-7bb44f7d4c-kl6dd" WorkloadEndpoint="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:34.772698 systemd[1]: Started cri-containerd-c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d.scope - libcontainer container c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d. Jan 14 14:36:34.819310 containerd[1697]: time="2025-01-14T14:36:34.819249532Z" level=info msg="CreateContainer within sandbox \"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"59ef889e6c7509ccd48927000520c172723f283b91da2b6151e79a35a718e614\"" Jan 14 14:36:34.825544 containerd[1697]: time="2025-01-14T14:36:34.824690092Z" level=info msg="StartContainer for \"59ef889e6c7509ccd48927000520c172723f283b91da2b6151e79a35a718e614\"" Jan 14 14:36:34.835995 containerd[1697]: time="2025-01-14T14:36:34.835877320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 14 14:36:34.836185 containerd[1697]: time="2025-01-14T14:36:34.836028125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 14 14:36:34.836185 containerd[1697]: time="2025-01-14T14:36:34.836064226Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:34.836295 containerd[1697]: time="2025-01-14T14:36:34.836179429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 14 14:36:34.883198 systemd[1]: Started cri-containerd-ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f.scope - libcontainer container ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f. Jan 14 14:36:34.893005 containerd[1697]: time="2025-01-14T14:36:34.892943098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-gs5qf,Uid:e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a,Namespace:kube-system,Attempt:1,} returns sandbox id \"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d\"" Jan 14 14:36:34.900220 containerd[1697]: time="2025-01-14T14:36:34.899624494Z" level=info msg="CreateContainer within sandbox \"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 14:36:34.920516 systemd[1]: Started cri-containerd-59ef889e6c7509ccd48927000520c172723f283b91da2b6151e79a35a718e614.scope - libcontainer container 59ef889e6c7509ccd48927000520c172723f283b91da2b6151e79a35a718e614. Jan 14 14:36:34.961765 containerd[1697]: time="2025-01-14T14:36:34.961708219Z" level=info msg="CreateContainer within sandbox \"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1869be09772517637a8c9bbb697fb8034eda4a82cd941ca8ca65a1c5415c27b7\"" Jan 14 14:36:34.963524 containerd[1697]: time="2025-01-14T14:36:34.962704648Z" level=info msg="StartContainer for \"1869be09772517637a8c9bbb697fb8034eda4a82cd941ca8ca65a1c5415c27b7\"" Jan 14 14:36:34.999372 containerd[1697]: time="2025-01-14T14:36:34.999161420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7bb44f7d4c-kl6dd,Uid:f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e,Namespace:calico-system,Attempt:1,} returns sandbox id \"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f\"" Jan 14 14:36:35.022697 systemd[1]: Started cri-containerd-1869be09772517637a8c9bbb697fb8034eda4a82cd941ca8ca65a1c5415c27b7.scope - libcontainer container 1869be09772517637a8c9bbb697fb8034eda4a82cd941ca8ca65a1c5415c27b7. 
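The sequence visible across these entries is the CRI contract kubelet drives: RunPodSandbox returns a sandbox ID (and triggers the Calico ADD), CreateContainer is issued against that sandbox and returns a container ID, and StartContainer runs it. A hedged sketch against the CRI gRPC API using the gs5qf pod's metadata from the log (the socket path is the conventional containerd default, and the image reference is illustrative, since the log never names it):

    package main

    import (
        "context"
        "fmt"
        "log"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx := context.Background()

        sandboxCfg := &runtimeapi.PodSandboxConfig{
            Metadata: &runtimeapi.PodSandboxMetadata{
                Name:      "coredns-7db6d8ff4d-gs5qf",
                Namespace: "kube-system",
                Uid:       "e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a",
                Attempt:   1, // matches Attempt:1 in the RunPodSandbox log line
            },
        }
        // 1. RunPodSandbox: network setup happens here, returning the sandbox ID.
        sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
        if err != nil {
            log.Fatal(err)
        }

        // 2. CreateContainer inside the sandbox; image name is a placeholder.
        cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
            PodSandboxId: sb.PodSandboxId,
            Config: &runtimeapi.ContainerConfig{
                Metadata: &runtimeapi.ContainerMetadata{Name: "coredns"},
                Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"},
            },
            SandboxConfig: sandboxCfg,
        })
        if err != nil {
            log.Fatal(err)
        }

        // 3. StartContainer, after which "StartContainer ... returns successfully" is logged.
        if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
            log.Fatal(err)
        }
        fmt.Println("sandbox:", sb.PodSandboxId, "container:", cc.ContainerId)
    }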
Jan 14 14:36:35.025567 containerd[1697]: time="2025-01-14T14:36:35.024750972Z" level=info msg="StartContainer for \"59ef889e6c7509ccd48927000520c172723f283b91da2b6151e79a35a718e614\" returns successfully" Jan 14 14:36:35.063958 containerd[1697]: time="2025-01-14T14:36:35.063892023Z" level=info msg="StartContainer for \"1869be09772517637a8c9bbb697fb8034eda4a82cd941ca8ca65a1c5415c27b7\" returns successfully" Jan 14 14:36:35.297441 kubelet[3240]: I0114 14:36:35.297354 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77645db595-cvbcq" podStartSLOduration=31.994677507 podStartE2EDuration="35.297332984s" podCreationTimestamp="2025-01-14 14:36:00 +0000 UTC" firstStartedPulling="2025-01-14 14:36:31.413416622 +0000 UTC m=+55.507813387" lastFinishedPulling="2025-01-14 14:36:34.716072099 +0000 UTC m=+58.810468864" observedRunningTime="2025-01-14 14:36:35.29583254 +0000 UTC m=+59.390229305" watchObservedRunningTime="2025-01-14 14:36:35.297332984 +0000 UTC m=+59.391729849" Jan 14 14:36:35.317136 kubelet[3240]: I0114 14:36:35.317026 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-gs5qf" podStartSLOduration=42.317002462 podStartE2EDuration="42.317002462s" podCreationTimestamp="2025-01-14 14:35:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-14 14:36:35.314451287 +0000 UTC m=+59.408848152" watchObservedRunningTime="2025-01-14 14:36:35.317002462 +0000 UTC m=+59.411399327" Jan 14 14:36:35.778702 systemd-networkd[1575]: calia16e8aa82d8: Gained IPv6LL Jan 14 14:36:35.905651 systemd-networkd[1575]: calid750418ebec: Gained IPv6LL Jan 14 14:36:36.007489 containerd[1697]: time="2025-01-14T14:36:36.006947942Z" level=info msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.046 [WARNING][5306] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8634fa1-fd6c-4670-aa78-0572c049583e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d", Pod:"calico-apiserver-77645db595-cvbcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54c3736ff01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.046 [INFO][5306] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.046 [INFO][5306] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" iface="eth0" netns="" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.046 [INFO][5306] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.046 [INFO][5306] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.067 [INFO][5312] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.068 [INFO][5312] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.068 [INFO][5312] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.075 [WARNING][5312] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.076 [INFO][5312] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.079 [INFO][5312] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.083588 containerd[1697]: 2025-01-14 14:36:36.081 [INFO][5306] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.084809 containerd[1697]: time="2025-01-14T14:36:36.083587495Z" level=info msg="TearDown network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" successfully" Jan 14 14:36:36.084809 containerd[1697]: time="2025-01-14T14:36:36.083618596Z" level=info msg="StopPodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" returns successfully" Jan 14 14:36:36.084809 containerd[1697]: time="2025-01-14T14:36:36.084040609Z" level=info msg="RemovePodSandbox for \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" Jan 14 14:36:36.084809 containerd[1697]: time="2025-01-14T14:36:36.084074310Z" level=info msg="Forcibly stopping sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\"" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.117 [WARNING][5330] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"e8634fa1-fd6c-4670-aa78-0572c049583e", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"118eec92e6c145e721ebaab5ecc5845ac338599414d855a3191d7484ae93db9d", Pod:"calico-apiserver-77645db595-cvbcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali54c3736ff01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.118 [INFO][5330] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.118 [INFO][5330] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" iface="eth0" netns="" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.118 [INFO][5330] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.118 [INFO][5330] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.138 [INFO][5336] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.138 [INFO][5336] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.138 [INFO][5336] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.143 [WARNING][5336] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.143 [INFO][5336] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" HandleID="k8s-pod-network.4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--cvbcq-eth0" Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.144 [INFO][5336] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.147182 containerd[1697]: 2025-01-14 14:36:36.145 [INFO][5330] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f" Jan 14 14:36:36.147995 containerd[1697]: time="2025-01-14T14:36:36.147212065Z" level=info msg="TearDown network for sandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" successfully" Jan 14 14:36:36.174753 containerd[1697]: time="2025-01-14T14:36:36.174387064Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:36.174753 containerd[1697]: time="2025-01-14T14:36:36.174511868Z" level=info msg="RemovePodSandbox \"4b21577d6fb4aa78e1e0679d27c35b26f103a4001bc58e6f279940107720ac1f\" returns successfully" Jan 14 14:36:36.175506 containerd[1697]: time="2025-01-14T14:36:36.175332792Z" level=info msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.233 [WARNING][5358] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c33bf41e-f146-4bd5-b602-c1e913049366", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e", Pod:"coredns-7db6d8ff4d-tphr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a4aaf61539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.233 [INFO][5358] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.233 [INFO][5358] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" iface="eth0" netns="" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.233 [INFO][5358] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.233 [INFO][5358] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.253 [INFO][5366] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.254 [INFO][5366] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.254 [INFO][5366] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.259 [WARNING][5366] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.259 [INFO][5366] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.261 [INFO][5366] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.263617 containerd[1697]: 2025-01-14 14:36:36.262 [INFO][5358] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.264383 containerd[1697]: time="2025-01-14T14:36:36.263678689Z" level=info msg="TearDown network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" successfully" Jan 14 14:36:36.264383 containerd[1697]: time="2025-01-14T14:36:36.263708190Z" level=info msg="StopPodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" returns successfully" Jan 14 14:36:36.264757 containerd[1697]: time="2025-01-14T14:36:36.264727720Z" level=info msg="RemovePodSandbox for \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" Jan 14 14:36:36.264869 containerd[1697]: time="2025-01-14T14:36:36.264791921Z" level=info msg="Forcibly stopping sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\"" Jan 14 14:36:36.291125 kubelet[3240]: I0114 14:36:36.291071 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.301 [WARNING][5384] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"c33bf41e-f146-4bd5-b602-c1e913049366", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1c66ff34bdbc7cdb92601731015e345b5b104371219a3878798fd403734dd33e", Pod:"coredns-7db6d8ff4d-tphr5", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5a4aaf61539", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.301 [INFO][5384] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.301 [INFO][5384] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" iface="eth0" netns="" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.301 [INFO][5384] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.301 [INFO][5384] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.320 [INFO][5390] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.320 [INFO][5390] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.320 [INFO][5390] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.327 [WARNING][5390] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.327 [INFO][5390] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" HandleID="k8s-pod-network.4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--tphr5-eth0" Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.328 [INFO][5390] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.331052 containerd[1697]: 2025-01-14 14:36:36.329 [INFO][5384] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc" Jan 14 14:36:36.331919 containerd[1697]: time="2025-01-14T14:36:36.331107771Z" level=info msg="TearDown network for sandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" successfully" Jan 14 14:36:36.341500 containerd[1697]: time="2025-01-14T14:36:36.340684652Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:36.341500 containerd[1697]: time="2025-01-14T14:36:36.340765555Z" level=info msg="RemovePodSandbox \"4aa784a74754559c57f975b2a64ac89a0e331fc586fb3e01b9349a1734cbcbdc\" returns successfully" Jan 14 14:36:36.342524 containerd[1697]: time="2025-01-14T14:36:36.342102894Z" level=info msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.377 [WARNING][5408] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09a9e4f5-6d4c-44f9-814c-0e031fb006c1", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c", Pod:"csi-node-driver-jckhs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba7442e9fb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.377 [INFO][5408] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.377 [INFO][5408] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" iface="eth0" netns="" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.377 [INFO][5408] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.377 [INFO][5408] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.397 [INFO][5414] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.397 [INFO][5414] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.397 [INFO][5414] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.404 [WARNING][5414] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.404 [INFO][5414] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.406 [INFO][5414] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.409286 containerd[1697]: 2025-01-14 14:36:36.408 [INFO][5408] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.410400 containerd[1697]: time="2025-01-14T14:36:36.409341870Z" level=info msg="TearDown network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" successfully" Jan 14 14:36:36.410400 containerd[1697]: time="2025-01-14T14:36:36.409373871Z" level=info msg="StopPodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" returns successfully" Jan 14 14:36:36.410400 containerd[1697]: time="2025-01-14T14:36:36.410030891Z" level=info msg="RemovePodSandbox for \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" Jan 14 14:36:36.410400 containerd[1697]: time="2025-01-14T14:36:36.410065892Z" level=info msg="Forcibly stopping sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\"" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.459 [WARNING][5432] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"09a9e4f5-6d4c-44f9-814c-0e031fb006c1", ResourceVersion:"828", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c", Pod:"csi-node-driver-jckhs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.34.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliba7442e9fb6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.459 [INFO][5432] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.459 [INFO][5432] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" iface="eth0" netns="" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.459 [INFO][5432] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.459 [INFO][5432] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.484 [INFO][5438] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.485 [INFO][5438] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.485 [INFO][5438] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.490 [WARNING][5438] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.490 [INFO][5438] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" HandleID="k8s-pod-network.9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-csi--node--driver--jckhs-eth0" Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.491 [INFO][5438] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.493599 containerd[1697]: 2025-01-14 14:36:36.492 [INFO][5432] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff" Jan 14 14:36:36.493599 containerd[1697]: time="2025-01-14T14:36:36.493585047Z" level=info msg="TearDown network for sandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" successfully" Jan 14 14:36:36.530613 containerd[1697]: time="2025-01-14T14:36:36.530555833Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:36.530784 containerd[1697]: time="2025-01-14T14:36:36.530651936Z" level=info msg="RemovePodSandbox \"9e769414e34180d22f2d1b07319c39f54048fdec10932ef99e12ee8848800aff\" returns successfully" Jan 14 14:36:36.531492 containerd[1697]: time="2025-01-14T14:36:36.531375057Z" level=info msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.595 [WARNING][5460] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0", GenerateName:"calico-kube-controllers-7bb44f7d4c-", Namespace:"calico-system", SelfLink:"", UID:"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb44f7d4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f", Pod:"calico-kube-controllers-7bb44f7d4c-kl6dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia16e8aa82d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.595 [INFO][5460] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.595 [INFO][5460] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" iface="eth0" netns="" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.595 [INFO][5460] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.595 [INFO][5460] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.627 [INFO][5466] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.628 [INFO][5466] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.628 [INFO][5466] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.637 [WARNING][5466] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.637 [INFO][5466] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.639 [INFO][5466] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.645662 containerd[1697]: 2025-01-14 14:36:36.642 [INFO][5460] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.645662 containerd[1697]: time="2025-01-14T14:36:36.645592915Z" level=info msg="TearDown network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" successfully" Jan 14 14:36:36.645662 containerd[1697]: time="2025-01-14T14:36:36.645625716Z" level=info msg="StopPodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" returns successfully" Jan 14 14:36:36.647995 containerd[1697]: time="2025-01-14T14:36:36.646673346Z" level=info msg="RemovePodSandbox for \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" Jan 14 14:36:36.647995 containerd[1697]: time="2025-01-14T14:36:36.646712948Z" level=info msg="Forcibly stopping sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\"" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.708 [WARNING][5485] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0", GenerateName:"calico-kube-controllers-7bb44f7d4c-", Namespace:"calico-system", SelfLink:"", UID:"f1d6f4c1-bcbc-47e7-b8d4-c5e9b1c5d26e", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7bb44f7d4c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f", Pod:"calico-kube-controllers-7bb44f7d4c-kl6dd", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.34.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calia16e8aa82d8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.708 [INFO][5485] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.708 [INFO][5485] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" iface="eth0" netns="" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.708 [INFO][5485] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.708 [INFO][5485] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.740 [INFO][5492] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.741 [INFO][5492] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.741 [INFO][5492] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.749 [WARNING][5492] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.749 [INFO][5492] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" HandleID="k8s-pod-network.4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--kube--controllers--7bb44f7d4c--kl6dd-eth0" Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.751 [INFO][5492] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.753332 containerd[1697]: 2025-01-14 14:36:36.752 [INFO][5485] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7" Jan 14 14:36:36.754104 containerd[1697]: time="2025-01-14T14:36:36.753374783Z" level=info msg="TearDown network for sandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" successfully" Jan 14 14:36:36.783523 containerd[1697]: time="2025-01-14T14:36:36.783445067Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:36.783710 containerd[1697]: time="2025-01-14T14:36:36.783605571Z" level=info msg="RemovePodSandbox \"4724180cab1e8a63d4361b3aec875052ab8421e9e9f7b7f1792f44fc52d10fb7\" returns successfully" Jan 14 14:36:36.784231 containerd[1697]: time="2025-01-14T14:36:36.784194489Z" level=info msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" Jan 14 14:36:36.795721 containerd[1697]: time="2025-01-14T14:36:36.795655125Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 14 14:36:36.796455 containerd[1697]: time="2025-01-14T14:36:36.795812730Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:36.800823 containerd[1697]: time="2025-01-14T14:36:36.800783576Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:36.807245 containerd[1697]: time="2025-01-14T14:36:36.807196665Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:36.809011 containerd[1697]: time="2025-01-14T14:36:36.808774311Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.090893459s" Jan 14 14:36:36.809011 containerd[1697]: time="2025-01-14T14:36:36.808846513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 14 14:36:36.813169 containerd[1697]: time="2025-01-14T14:36:36.813137839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 14 14:36:36.813261 containerd[1697]: time="2025-01-14T14:36:36.813210241Z" level=info msg="CreateContainer within sandbox \"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.838 [WARNING][5510] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d", Pod:"coredns-7db6d8ff4d-gs5qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid750418ebec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.838 [INFO][5510] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.838 [INFO][5510] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" iface="eth0" netns="" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.838 [INFO][5510] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.839 [INFO][5510] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.858 [INFO][5516] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.858 [INFO][5516] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.858 [INFO][5516] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.865 [WARNING][5516] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.865 [INFO][5516] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.868 [INFO][5516] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:36.870627 containerd[1697]: 2025-01-14 14:36:36.869 [INFO][5510] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:36.871283 containerd[1697]: time="2025-01-14T14:36:36.870672831Z" level=info msg="TearDown network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" successfully" Jan 14 14:36:36.871283 containerd[1697]: time="2025-01-14T14:36:36.870705831Z" level=info msg="StopPodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" returns successfully" Jan 14 14:36:36.871535 containerd[1697]: time="2025-01-14T14:36:36.871421053Z" level=info msg="RemovePodSandbox for \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" Jan 14 14:36:36.871535 containerd[1697]: time="2025-01-14T14:36:36.871458954Z" level=info msg="Forcibly stopping sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\"" Jan 14 14:36:36.889176 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2106857347.mount: Deactivated successfully. 
Jan 14 14:36:36.894351 containerd[1697]: time="2025-01-14T14:36:36.894306925Z" level=info msg="CreateContainer within sandbox \"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"e7a521de6c5ac26da7d896b128fd914b672d215fe5c31ac9cb49d526e66e89a7\"" Jan 14 14:36:36.896273 containerd[1697]: time="2025-01-14T14:36:36.895259853Z" level=info msg="StartContainer for \"e7a521de6c5ac26da7d896b128fd914b672d215fe5c31ac9cb49d526e66e89a7\"" Jan 14 14:36:36.943642 systemd[1]: Started cri-containerd-e7a521de6c5ac26da7d896b128fd914b672d215fe5c31ac9cb49d526e66e89a7.scope - libcontainer container e7a521de6c5ac26da7d896b128fd914b672d215fe5c31ac9cb49d526e66e89a7. Jan 14 14:36:36.993569 containerd[1697]: time="2025-01-14T14:36:36.993518941Z" level=info msg="StartContainer for \"e7a521de6c5ac26da7d896b128fd914b672d215fe5c31ac9cb49d526e66e89a7\" returns successfully" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.950 [WARNING][5534] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"e83a3fe2-fe35-4285-8916-cdbbbbdf9a7a", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 35, 53, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"c7c8ad11b1c75647165036a1ab64fd30cd68e7a5f29885adaf4a357c198d000d", Pod:"coredns-7db6d8ff4d-gs5qf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.34.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid750418ebec", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.951 [INFO][5534] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.951 [INFO][5534] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" iface="eth0" netns="" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.951 [INFO][5534] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.951 [INFO][5534] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.984 [INFO][5565] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.984 [INFO][5565] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.984 [INFO][5565] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.995 [WARNING][5565] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:36.996 [INFO][5565] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" HandleID="k8s-pod-network.65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-coredns--7db6d8ff4d--gs5qf-eth0" Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:37.000 [INFO][5565] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:37.003293 containerd[1697]: 2025-01-14 14:36:37.002 [INFO][5534] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5" Jan 14 14:36:37.003949 containerd[1697]: time="2025-01-14T14:36:37.003305329Z" level=info msg="TearDown network for sandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" successfully" Jan 14 14:36:37.016611 containerd[1697]: time="2025-01-14T14:36:37.016557319Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:37.017240 containerd[1697]: time="2025-01-14T14:36:37.016635621Z" level=info msg="RemovePodSandbox \"65756e52bfab855c65e1a92d02e2b985f326d1ef67dba594843555af792fa6c5\" returns successfully" Jan 14 14:36:37.017240 containerd[1697]: time="2025-01-14T14:36:37.017202438Z" level=info msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.051 [WARNING][5593] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"82a33404-98b2-48ea-9d78-61b8e4c56093", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6", Pod:"calico-apiserver-77645db595-fpj2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic423024cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.051 [INFO][5593] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.051 [INFO][5593] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" iface="eth0" netns="" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.051 [INFO][5593] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.051 [INFO][5593] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.073 [INFO][5599] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.073 [INFO][5599] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.073 [INFO][5599] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.078 [WARNING][5599] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.078 [INFO][5599] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.079 [INFO][5599] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:37.081789 containerd[1697]: 2025-01-14 14:36:37.080 [INFO][5593] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.082582 containerd[1697]: time="2025-01-14T14:36:37.081832437Z" level=info msg="TearDown network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" successfully" Jan 14 14:36:37.082582 containerd[1697]: time="2025-01-14T14:36:37.081869538Z" level=info msg="StopPodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" returns successfully" Jan 14 14:36:37.082582 containerd[1697]: time="2025-01-14T14:36:37.082492957Z" level=info msg="RemovePodSandbox for \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" Jan 14 14:36:37.082582 containerd[1697]: time="2025-01-14T14:36:37.082533058Z" level=info msg="Forcibly stopping sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\"" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.116 [WARNING][5618] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0", GenerateName:"calico-apiserver-77645db595-", Namespace:"calico-apiserver", SelfLink:"", UID:"82a33404-98b2-48ea-9d78-61b8e4c56093", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.January, 14, 14, 36, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"77645db595", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.3.0-a-0bb245c6fa", ContainerID:"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6", Pod:"calico-apiserver-77645db595-fpj2w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.34.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic423024cb94", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.116 [INFO][5618] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.116 [INFO][5618] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" iface="eth0" netns="" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.116 [INFO][5618] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.116 [INFO][5618] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.136 [INFO][5624] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.136 [INFO][5624] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.136 [INFO][5624] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.142 [WARNING][5624] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.142 [INFO][5624] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" HandleID="k8s-pod-network.3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Workload="ci--4081.3.0--a--0bb245c6fa-k8s-calico--apiserver--77645db595--fpj2w-eth0" Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.144 [INFO][5624] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 14 14:36:37.146541 containerd[1697]: 2025-01-14 14:36:37.145 [INFO][5618] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc" Jan 14 14:36:37.146541 containerd[1697]: time="2025-01-14T14:36:37.146400935Z" level=info msg="TearDown network for sandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" successfully" Jan 14 14:36:37.158171 containerd[1697]: time="2025-01-14T14:36:37.158053578Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 14 14:36:37.158171 containerd[1697]: time="2025-01-14T14:36:37.158149080Z" level=info msg="RemovePodSandbox \"3cfb48d95efa0922e663cc8320c1d517eb380033acf16eb6c3596481de6431dc\" returns successfully" Jan 14 14:36:37.200264 containerd[1697]: time="2025-01-14T14:36:37.200210517Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:37.203034 containerd[1697]: time="2025-01-14T14:36:37.202969598Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 14 14:36:37.205017 containerd[1697]: time="2025-01-14T14:36:37.204984957Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 391.805717ms" Jan 14 14:36:37.205017 containerd[1697]: time="2025-01-14T14:36:37.205021058Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 14 14:36:37.206458 containerd[1697]: time="2025-01-14T14:36:37.206233694Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 14 14:36:37.208138 containerd[1697]: time="2025-01-14T14:36:37.207971745Z" level=info msg="CreateContainer within sandbox \"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 14 14:36:37.244780 containerd[1697]: time="2025-01-14T14:36:37.244579921Z" level=info msg="CreateContainer within sandbox \"1f02699c7291a05ce34967f72a0d9061728af7e2275f79ee2a633f35d91719b6\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns 
container id \"a388ab3e72336cb69313811ccb6aa2a459b7c4be5a66ec048e8df82fd9e85218\"" Jan 14 14:36:37.246638 containerd[1697]: time="2025-01-14T14:36:37.246594980Z" level=info msg="StartContainer for \"a388ab3e72336cb69313811ccb6aa2a459b7c4be5a66ec048e8df82fd9e85218\"" Jan 14 14:36:37.277655 systemd[1]: Started cri-containerd-a388ab3e72336cb69313811ccb6aa2a459b7c4be5a66ec048e8df82fd9e85218.scope - libcontainer container a388ab3e72336cb69313811ccb6aa2a459b7c4be5a66ec048e8df82fd9e85218. Jan 14 14:36:37.327949 containerd[1697]: time="2025-01-14T14:36:37.327880170Z" level=info msg="StartContainer for \"a388ab3e72336cb69313811ccb6aa2a459b7c4be5a66ec048e8df82fd9e85218\" returns successfully" Jan 14 14:36:38.388662 kubelet[3240]: I0114 14:36:38.387446 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-77645db595-fpj2w" podStartSLOduration=34.258082333 podStartE2EDuration="38.387419109s" podCreationTimestamp="2025-01-14 14:36:00 +0000 UTC" firstStartedPulling="2025-01-14 14:36:33.076666511 +0000 UTC m=+57.171063276" lastFinishedPulling="2025-01-14 14:36:37.206003287 +0000 UTC m=+61.300400052" observedRunningTime="2025-01-14 14:36:38.333269571 +0000 UTC m=+62.427666436" watchObservedRunningTime="2025-01-14 14:36:38.387419109 +0000 UTC m=+62.481815974" Jan 14 14:36:39.563041 containerd[1697]: time="2025-01-14T14:36:39.562980541Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:39.566997 containerd[1697]: time="2025-01-14T14:36:39.566936446Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 14 14:36:39.578113 containerd[1697]: time="2025-01-14T14:36:39.578033541Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:39.586299 containerd[1697]: time="2025-01-14T14:36:39.586229959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:39.587110 containerd[1697]: time="2025-01-14T14:36:39.586949878Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.380682182s" Jan 14 14:36:39.587110 containerd[1697]: time="2025-01-14T14:36:39.586995479Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 14 14:36:39.588450 containerd[1697]: time="2025-01-14T14:36:39.588281613Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 14 14:36:39.612769 containerd[1697]: time="2025-01-14T14:36:39.612718662Z" level=info msg="CreateContainer within sandbox \"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 14 14:36:39.659331 containerd[1697]: time="2025-01-14T14:36:39.659274899Z" 
level=info msg="CreateContainer within sandbox \"ef6718317f9228ae0b2d8c669755c15169ad9b4d73c9afc5b23de07a612cc22f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71\"" Jan 14 14:36:39.660174 containerd[1697]: time="2025-01-14T14:36:39.660016119Z" level=info msg="StartContainer for \"3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71\"" Jan 14 14:36:39.697785 systemd[1]: Started cri-containerd-3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71.scope - libcontainer container 3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71. Jan 14 14:36:39.744368 containerd[1697]: time="2025-01-14T14:36:39.744194455Z" level=info msg="StartContainer for \"3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71\" returns successfully" Jan 14 14:36:40.329984 kubelet[3240]: I0114 14:36:40.329915 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7bb44f7d4c-kl6dd" podStartSLOduration=35.75192945 podStartE2EDuration="40.329889816s" podCreationTimestamp="2025-01-14 14:36:00 +0000 UTC" firstStartedPulling="2025-01-14 14:36:35.010196144 +0000 UTC m=+59.104592909" lastFinishedPulling="2025-01-14 14:36:39.58815651 +0000 UTC m=+63.682553275" observedRunningTime="2025-01-14 14:36:40.329504206 +0000 UTC m=+64.423900971" watchObservedRunningTime="2025-01-14 14:36:40.329889816 +0000 UTC m=+64.424286581" Jan 14 14:36:41.365002 systemd[1]: run-containerd-runc-k8s.io-3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71-runc.hitBG8.mount: Deactivated successfully. Jan 14 14:36:41.569590 containerd[1697]: time="2025-01-14T14:36:41.569535150Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:41.572953 containerd[1697]: time="2025-01-14T14:36:41.572883539Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 14 14:36:41.577702 containerd[1697]: time="2025-01-14T14:36:41.577645466Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:41.582985 containerd[1697]: time="2025-01-14T14:36:41.582911005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 14:36:41.585509 containerd[1697]: time="2025-01-14T14:36:41.584909559Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.996589144s" Jan 14 14:36:41.585509 containerd[1697]: time="2025-01-14T14:36:41.584953160Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 14 14:36:41.591931 containerd[1697]: time="2025-01-14T14:36:41.591895044Z" level=info msg="CreateContainer within 
sandbox \"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 14 14:36:41.634096 containerd[1697]: time="2025-01-14T14:36:41.633973262Z" level=info msg="CreateContainer within sandbox \"0dc0a53c2caed0b95b6b77d1814819db3db4c942c6db99f5e213c9561fb21f0c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"fefe820779f4b285067ebdda053e7088f4aea26bf66ec0e79b65b7c316951e52\"" Jan 14 14:36:41.636375 containerd[1697]: time="2025-01-14T14:36:41.634900087Z" level=info msg="StartContainer for \"fefe820779f4b285067ebdda053e7088f4aea26bf66ec0e79b65b7c316951e52\"" Jan 14 14:36:41.668686 systemd[1]: Started cri-containerd-fefe820779f4b285067ebdda053e7088f4aea26bf66ec0e79b65b7c316951e52.scope - libcontainer container fefe820779f4b285067ebdda053e7088f4aea26bf66ec0e79b65b7c316951e52. Jan 14 14:36:41.706551 containerd[1697]: time="2025-01-14T14:36:41.706377686Z" level=info msg="StartContainer for \"fefe820779f4b285067ebdda053e7088f4aea26bf66ec0e79b65b7c316951e52\" returns successfully" Jan 14 14:36:42.110746 kubelet[3240]: I0114 14:36:42.110709 3240 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 14 14:36:42.110746 kubelet[3240]: I0114 14:36:42.110749 3240 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 14 14:36:42.339666 kubelet[3240]: I0114 14:36:42.338754 3240 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jckhs" podStartSLOduration=33.480641464 podStartE2EDuration="42.338728986s" podCreationTimestamp="2025-01-14 14:36:00 +0000 UTC" firstStartedPulling="2025-01-14 14:36:32.729891418 +0000 UTC m=+56.824288183" lastFinishedPulling="2025-01-14 14:36:41.58797894 +0000 UTC m=+65.682375705" observedRunningTime="2025-01-14 14:36:42.338020067 +0000 UTC m=+66.432416932" watchObservedRunningTime="2025-01-14 14:36:42.338728986 +0000 UTC m=+66.433125751" Jan 14 14:37:13.651001 kubelet[3240]: I0114 14:37:13.650640 3240 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 14:37:19.764165 systemd[1]: run-containerd-runc-k8s.io-708b8476fc8c38628abcb4486ac66cb233d013da2601bbf58d5b40a7b76c8a7b-runc.iit1kQ.mount: Deactivated successfully. Jan 14 14:37:38.736595 systemd[1]: Started sshd@7-10.200.8.34:22-10.200.16.10:54532.service - OpenSSH per-connection server daemon (10.200.16.10:54532). Jan 14 14:37:39.381408 sshd[5896]: Accepted publickey for core from 10.200.16.10 port 54532 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:37:39.384742 sshd[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:37:39.393345 systemd-logind[1672]: New session 10 of user core. Jan 14 14:37:39.397690 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 14 14:37:39.900300 sshd[5896]: pam_unix(sshd:session): session closed for user core Jan 14 14:37:39.910138 systemd[1]: sshd@7-10.200.8.34:22-10.200.16.10:54532.service: Deactivated successfully. Jan 14 14:37:39.915334 systemd[1]: session-10.scope: Deactivated successfully. Jan 14 14:37:39.916836 systemd-logind[1672]: Session 10 logged out. Waiting for processes to exit. Jan 14 14:37:39.918393 systemd-logind[1672]: Removed session 10. 
Jan 14 14:37:45.015676 systemd[1]: Started sshd@8-10.200.8.34:22-10.200.16.10:54544.service - OpenSSH per-connection server daemon (10.200.16.10:54544). Jan 14 14:37:45.663071 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 54544 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:37:45.665716 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:37:45.671767 systemd-logind[1672]: New session 11 of user core. Jan 14 14:37:45.675627 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 14 14:37:46.187681 sshd[5928]: pam_unix(sshd:session): session closed for user core Jan 14 14:37:46.190915 systemd[1]: sshd@8-10.200.8.34:22-10.200.16.10:54544.service: Deactivated successfully. Jan 14 14:37:46.193538 systemd[1]: session-11.scope: Deactivated successfully. Jan 14 14:37:46.195505 systemd-logind[1672]: Session 11 logged out. Waiting for processes to exit. Jan 14 14:37:46.196509 systemd-logind[1672]: Removed session 11. Jan 14 14:37:51.305761 systemd[1]: Started sshd@9-10.200.8.34:22-10.200.16.10:40572.service - OpenSSH per-connection server daemon (10.200.16.10:40572). Jan 14 14:37:51.941700 sshd[5987]: Accepted publickey for core from 10.200.16.10 port 40572 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:37:51.943280 sshd[5987]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:37:51.948163 systemd-logind[1672]: New session 12 of user core. Jan 14 14:37:51.951672 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 14 14:37:52.454360 sshd[5987]: pam_unix(sshd:session): session closed for user core Jan 14 14:37:52.459061 systemd[1]: sshd@9-10.200.8.34:22-10.200.16.10:40572.service: Deactivated successfully. Jan 14 14:37:52.461639 systemd[1]: session-12.scope: Deactivated successfully. Jan 14 14:37:52.462451 systemd-logind[1672]: Session 12 logged out. Waiting for processes to exit. Jan 14 14:37:52.463597 systemd-logind[1672]: Removed session 12. Jan 14 14:37:57.574001 systemd[1]: Started sshd@10-10.200.8.34:22-10.200.16.10:39766.service - OpenSSH per-connection server daemon (10.200.16.10:39766). Jan 14 14:37:58.209983 sshd[6004]: Accepted publickey for core from 10.200.16.10 port 39766 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:37:58.211648 sshd[6004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:37:58.216404 systemd-logind[1672]: New session 13 of user core. Jan 14 14:37:58.220650 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 14 14:37:58.724859 sshd[6004]: pam_unix(sshd:session): session closed for user core Jan 14 14:37:58.728751 systemd[1]: sshd@10-10.200.8.34:22-10.200.16.10:39766.service: Deactivated successfully. Jan 14 14:37:58.731173 systemd[1]: session-13.scope: Deactivated successfully. Jan 14 14:37:58.733154 systemd-logind[1672]: Session 13 logged out. Waiting for processes to exit. Jan 14 14:37:58.734260 systemd-logind[1672]: Removed session 13. Jan 14 14:37:58.841865 systemd[1]: Started sshd@11-10.200.8.34:22-10.200.16.10:39768.service - OpenSSH per-connection server daemon (10.200.16.10:39768). 
Jan 14 14:37:59.477427 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 39768 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:37:59.479011 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:37:59.483867 systemd-logind[1672]: New session 14 of user core. Jan 14 14:37:59.489636 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 14 14:38:00.028302 sshd[6017]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:00.031754 systemd[1]: sshd@11-10.200.8.34:22-10.200.16.10:39768.service: Deactivated successfully. Jan 14 14:38:00.034265 systemd[1]: session-14.scope: Deactivated successfully. Jan 14 14:38:00.035995 systemd-logind[1672]: Session 14 logged out. Waiting for processes to exit. Jan 14 14:38:00.037199 systemd-logind[1672]: Removed session 14. Jan 14 14:38:00.144796 systemd[1]: Started sshd@12-10.200.8.34:22-10.200.16.10:39776.service - OpenSSH per-connection server daemon (10.200.16.10:39776). Jan 14 14:38:00.781596 sshd[6028]: Accepted publickey for core from 10.200.16.10 port 39776 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:00.783222 sshd[6028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:00.787562 systemd-logind[1672]: New session 15 of user core. Jan 14 14:38:00.793649 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 14 14:38:01.292195 sshd[6028]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:01.295754 systemd[1]: sshd@12-10.200.8.34:22-10.200.16.10:39776.service: Deactivated successfully. Jan 14 14:38:01.298295 systemd[1]: session-15.scope: Deactivated successfully. Jan 14 14:38:01.300376 systemd-logind[1672]: Session 15 logged out. Waiting for processes to exit. Jan 14 14:38:01.301659 systemd-logind[1672]: Removed session 15. Jan 14 14:38:06.414785 systemd[1]: Started sshd@13-10.200.8.34:22-10.200.16.10:52776.service - OpenSSH per-connection server daemon (10.200.16.10:52776). Jan 14 14:38:07.049565 sshd[6061]: Accepted publickey for core from 10.200.16.10 port 52776 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:07.051308 sshd[6061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:07.056704 systemd-logind[1672]: New session 16 of user core. Jan 14 14:38:07.059659 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 14 14:38:07.559392 sshd[6061]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:07.562697 systemd[1]: sshd@13-10.200.8.34:22-10.200.16.10:52776.service: Deactivated successfully. Jan 14 14:38:07.565053 systemd[1]: session-16.scope: Deactivated successfully. Jan 14 14:38:07.566809 systemd-logind[1672]: Session 16 logged out. Waiting for processes to exit. Jan 14 14:38:07.567821 systemd-logind[1672]: Removed session 16. Jan 14 14:38:12.682779 systemd[1]: Started sshd@14-10.200.8.34:22-10.200.16.10:52778.service - OpenSSH per-connection server daemon (10.200.16.10:52778). Jan 14 14:38:13.319527 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 52778 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:13.321234 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:13.325523 systemd-logind[1672]: New session 17 of user core. Jan 14 14:38:13.334653 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 14 14:38:13.833724 sshd[6073]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:13.837322 systemd[1]: sshd@14-10.200.8.34:22-10.200.16.10:52778.service: Deactivated successfully. Jan 14 14:38:13.840087 systemd[1]: session-17.scope: Deactivated successfully. Jan 14 14:38:13.841841 systemd-logind[1672]: Session 17 logged out. Waiting for processes to exit. Jan 14 14:38:13.843000 systemd-logind[1672]: Removed session 17. Jan 14 14:38:18.954827 systemd[1]: Started sshd@15-10.200.8.34:22-10.200.16.10:42000.service - OpenSSH per-connection server daemon (10.200.16.10:42000). Jan 14 14:38:19.599779 sshd[6106]: Accepted publickey for core from 10.200.16.10 port 42000 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:19.600413 sshd[6106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:19.608204 systemd-logind[1672]: New session 18 of user core. Jan 14 14:38:19.612660 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 14 14:38:20.117589 sshd[6106]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:20.124953 systemd[1]: sshd@15-10.200.8.34:22-10.200.16.10:42000.service: Deactivated successfully. Jan 14 14:38:20.128109 systemd[1]: session-18.scope: Deactivated successfully. Jan 14 14:38:20.131652 systemd-logind[1672]: Session 18 logged out. Waiting for processes to exit. Jan 14 14:38:20.133486 systemd-logind[1672]: Removed session 18. Jan 14 14:38:25.232787 systemd[1]: Started sshd@16-10.200.8.34:22-10.200.16.10:42016.service - OpenSSH per-connection server daemon (10.200.16.10:42016). Jan 14 14:38:25.870875 sshd[6143]: Accepted publickey for core from 10.200.16.10 port 42016 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:25.872598 sshd[6143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:25.877374 systemd-logind[1672]: New session 19 of user core. Jan 14 14:38:25.885662 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 14 14:38:26.384801 sshd[6143]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:26.388686 systemd[1]: sshd@16-10.200.8.34:22-10.200.16.10:42016.service: Deactivated successfully. Jan 14 14:38:26.391710 systemd[1]: session-19.scope: Deactivated successfully. Jan 14 14:38:26.394127 systemd-logind[1672]: Session 19 logged out. Waiting for processes to exit. Jan 14 14:38:26.395317 systemd-logind[1672]: Removed session 19. Jan 14 14:38:26.507571 systemd[1]: Started sshd@17-10.200.8.34:22-10.200.16.10:40144.service - OpenSSH per-connection server daemon (10.200.16.10:40144). Jan 14 14:38:27.148158 sshd[6157]: Accepted publickey for core from 10.200.16.10 port 40144 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:27.149953 sshd[6157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:27.154970 systemd-logind[1672]: New session 20 of user core. Jan 14 14:38:27.164657 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 14 14:38:27.716573 sshd[6157]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:27.720704 systemd[1]: sshd@17-10.200.8.34:22-10.200.16.10:40144.service: Deactivated successfully. Jan 14 14:38:27.722953 systemd[1]: session-20.scope: Deactivated successfully. Jan 14 14:38:27.723904 systemd-logind[1672]: Session 20 logged out. Waiting for processes to exit. Jan 14 14:38:27.724933 systemd-logind[1672]: Removed session 20. 
Jan 14 14:38:27.838998 systemd[1]: Started sshd@18-10.200.8.34:22-10.200.16.10:40152.service - OpenSSH per-connection server daemon (10.200.16.10:40152). Jan 14 14:38:28.487164 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 40152 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:28.488983 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:28.494346 systemd-logind[1672]: New session 21 of user core. Jan 14 14:38:28.499667 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 14 14:38:30.805136 sshd[6168]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:30.809395 systemd[1]: sshd@18-10.200.8.34:22-10.200.16.10:40152.service: Deactivated successfully. Jan 14 14:38:30.811992 systemd[1]: session-21.scope: Deactivated successfully. Jan 14 14:38:30.812932 systemd-logind[1672]: Session 21 logged out. Waiting for processes to exit. Jan 14 14:38:30.813986 systemd-logind[1672]: Removed session 21. Jan 14 14:38:30.920862 systemd[1]: Started sshd@19-10.200.8.34:22-10.200.16.10:40154.service - OpenSSH per-connection server daemon (10.200.16.10:40154). Jan 14 14:38:31.563481 sshd[6186]: Accepted publickey for core from 10.200.16.10 port 40154 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:31.565596 sshd[6186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:31.571548 systemd-logind[1672]: New session 22 of user core. Jan 14 14:38:31.576618 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 14 14:38:32.190134 sshd[6186]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:32.195033 systemd[1]: sshd@19-10.200.8.34:22-10.200.16.10:40154.service: Deactivated successfully. Jan 14 14:38:32.197278 systemd[1]: session-22.scope: Deactivated successfully. Jan 14 14:38:32.198124 systemd-logind[1672]: Session 22 logged out. Waiting for processes to exit. Jan 14 14:38:32.199172 systemd-logind[1672]: Removed session 22. Jan 14 14:38:32.310142 systemd[1]: Started sshd@20-10.200.8.34:22-10.200.16.10:40158.service - OpenSSH per-connection server daemon (10.200.16.10:40158). Jan 14 14:38:32.945974 sshd[6197]: Accepted publickey for core from 10.200.16.10 port 40158 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:32.947566 sshd[6197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:32.952704 systemd-logind[1672]: New session 23 of user core. Jan 14 14:38:32.958636 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 14 14:38:33.454318 sshd[6197]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:33.457504 systemd[1]: sshd@20-10.200.8.34:22-10.200.16.10:40158.service: Deactivated successfully. Jan 14 14:38:33.459804 systemd[1]: session-23.scope: Deactivated successfully. Jan 14 14:38:33.461383 systemd-logind[1672]: Session 23 logged out. Waiting for processes to exit. Jan 14 14:38:33.462941 systemd-logind[1672]: Removed session 23. Jan 14 14:38:38.569817 systemd[1]: Started sshd@21-10.200.8.34:22-10.200.16.10:51084.service - OpenSSH per-connection server daemon (10.200.16.10:51084). 
Jan 14 14:38:39.213892 sshd[6212]: Accepted publickey for core from 10.200.16.10 port 51084 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:39.215567 sshd[6212]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:39.220625 systemd-logind[1672]: New session 24 of user core. Jan 14 14:38:39.225665 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 14 14:38:39.775343 sshd[6212]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:39.781428 systemd[1]: sshd@21-10.200.8.34:22-10.200.16.10:51084.service: Deactivated successfully. Jan 14 14:38:39.785695 systemd[1]: session-24.scope: Deactivated successfully. Jan 14 14:38:39.787302 systemd-logind[1672]: Session 24 logged out. Waiting for processes to exit. Jan 14 14:38:39.788649 systemd-logind[1672]: Removed session 24. Jan 14 14:38:44.893574 systemd[1]: Started sshd@22-10.200.8.34:22-10.200.16.10:51086.service - OpenSSH per-connection server daemon (10.200.16.10:51086). Jan 14 14:38:45.539729 sshd[6244]: Accepted publickey for core from 10.200.16.10 port 51086 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:45.541314 sshd[6244]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:45.545508 systemd-logind[1672]: New session 25 of user core. Jan 14 14:38:45.549668 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 14 14:38:46.045619 sshd[6244]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:46.049361 systemd[1]: sshd@22-10.200.8.34:22-10.200.16.10:51086.service: Deactivated successfully. Jan 14 14:38:46.052267 systemd[1]: session-25.scope: Deactivated successfully. Jan 14 14:38:46.054494 systemd-logind[1672]: Session 25 logged out. Waiting for processes to exit. Jan 14 14:38:46.055599 systemd-logind[1672]: Removed session 25. Jan 14 14:38:47.081660 systemd[1]: run-containerd-runc-k8s.io-3f12e974cd3c78b33afb60e04f214ae1466c7769f647b83e4b79c3cfbc247c71-runc.DVT42J.mount: Deactivated successfully. Jan 14 14:38:51.167755 systemd[1]: Started sshd@23-10.200.8.34:22-10.200.16.10:49590.service - OpenSSH per-connection server daemon (10.200.16.10:49590). Jan 14 14:38:51.802453 sshd[6299]: Accepted publickey for core from 10.200.16.10 port 49590 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:51.804180 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:51.808910 systemd-logind[1672]: New session 26 of user core. Jan 14 14:38:51.815656 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 14 14:38:52.317102 sshd[6299]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:52.322168 systemd[1]: sshd@23-10.200.8.34:22-10.200.16.10:49590.service: Deactivated successfully. Jan 14 14:38:52.325034 systemd[1]: session-26.scope: Deactivated successfully. Jan 14 14:38:52.326016 systemd-logind[1672]: Session 26 logged out. Waiting for processes to exit. Jan 14 14:38:52.327212 systemd-logind[1672]: Removed session 26. Jan 14 14:38:57.435761 systemd[1]: Started sshd@24-10.200.8.34:22-10.200.16.10:40318.service - OpenSSH per-connection server daemon (10.200.16.10:40318). 
Jan 14 14:38:58.069412 sshd[6315]: Accepted publickey for core from 10.200.16.10 port 40318 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:38:58.071079 sshd[6315]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:38:58.075521 systemd-logind[1672]: New session 27 of user core. Jan 14 14:38:58.079677 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 14 14:38:58.576125 sshd[6315]: pam_unix(sshd:session): session closed for user core Jan 14 14:38:58.581099 systemd[1]: sshd@24-10.200.8.34:22-10.200.16.10:40318.service: Deactivated successfully. Jan 14 14:38:58.583975 systemd[1]: session-27.scope: Deactivated successfully. Jan 14 14:38:58.585061 systemd-logind[1672]: Session 27 logged out. Waiting for processes to exit. Jan 14 14:38:58.586137 systemd-logind[1672]: Removed session 27. Jan 14 14:39:03.689912 systemd[1]: Started sshd@25-10.200.8.34:22-10.200.16.10:40328.service - OpenSSH per-connection server daemon (10.200.16.10:40328). Jan 14 14:39:04.335305 sshd[6330]: Accepted publickey for core from 10.200.16.10 port 40328 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:39:04.337132 sshd[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:39:04.342856 systemd-logind[1672]: New session 28 of user core. Jan 14 14:39:04.348641 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 14 14:39:04.845326 sshd[6330]: pam_unix(sshd:session): session closed for user core Jan 14 14:39:04.849319 systemd[1]: sshd@25-10.200.8.34:22-10.200.16.10:40328.service: Deactivated successfully. Jan 14 14:39:04.851909 systemd[1]: session-28.scope: Deactivated successfully. Jan 14 14:39:04.852975 systemd-logind[1672]: Session 28 logged out. Waiting for processes to exit. Jan 14 14:39:04.854031 systemd-logind[1672]: Removed session 28. Jan 14 14:39:09.962790 systemd[1]: Started sshd@26-10.200.8.34:22-10.200.16.10:42414.service - OpenSSH per-connection server daemon (10.200.16.10:42414). Jan 14 14:39:10.598940 sshd[6342]: Accepted publickey for core from 10.200.16.10 port 42414 ssh2: RSA SHA256:a+2S8eoWQbQVO3JLXgiieN5lEiWvIBRg13en2/CE8M8 Jan 14 14:39:10.600831 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 14:39:10.605175 systemd-logind[1672]: New session 29 of user core. Jan 14 14:39:10.610640 systemd[1]: Started session-29.scope - Session 29 of User core. Jan 14 14:39:11.110368 sshd[6342]: pam_unix(sshd:session): session closed for user core Jan 14 14:39:11.113474 systemd[1]: sshd@26-10.200.8.34:22-10.200.16.10:42414.service: Deactivated successfully. Jan 14 14:39:11.116040 systemd[1]: session-29.scope: Deactivated successfully. Jan 14 14:39:11.118326 systemd-logind[1672]: Session 29 logged out. Waiting for processes to exit. Jan 14 14:39:11.119550 systemd-logind[1672]: Removed session 29.