Dec 13 01:03:28.144256 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Dec 12 23:15:00 -00 2024
Dec 13 01:03:28.144307 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:03:28.144324 kernel: BIOS-provided physical RAM map:
Dec 13 01:03:28.144335 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 01:03:28.144346 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Dec 13 01:03:28.144357 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Dec 13 01:03:28.144371 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Dec 13 01:03:28.144387 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Dec 13 01:03:28.144399 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Dec 13 01:03:28.144410 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Dec 13 01:03:28.144443 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Dec 13 01:03:28.144455 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Dec 13 01:03:28.144466 kernel: printk: bootconsole [earlyser0] enabled
Dec 13 01:03:28.144478 kernel: NX (Execute Disable) protection: active
Dec 13 01:03:28.144498 kernel: APIC: Static calls initialized
Dec 13 01:03:28.144511 kernel: efi: EFI v2.7 by Microsoft
Dec 13 01:03:28.144524 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee75a98
Dec 13 01:03:28.144537 kernel: SMBIOS 3.1.0 present.
Dec 13 01:03:28.144550 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Dec 13 01:03:28.144563 kernel: Hypervisor detected: Microsoft Hyper-V
Dec 13 01:03:28.144576 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Dec 13 01:03:28.144590 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Dec 13 01:03:28.144603 kernel: Hyper-V: Nested features: 0x1e0101
Dec 13 01:03:28.144615 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Dec 13 01:03:28.144633 kernel: Hyper-V: Using hypercall for remote TLB flush
Dec 13 01:03:28.144646 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:03:28.144659 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Dec 13 01:03:28.144673 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Dec 13 01:03:28.144687 kernel: tsc: Detected 2593.906 MHz processor
Dec 13 01:03:28.144700 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 01:03:28.144714 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 01:03:28.144727 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Dec 13 01:03:28.144741 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 01:03:28.144757 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 01:03:28.144770 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Dec 13 01:03:28.144783 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Dec 13 01:03:28.144797 kernel: Using GB pages for direct mapping
Dec 13 01:03:28.144810 kernel: Secure boot disabled
Dec 13 01:03:28.144823 kernel: ACPI: Early table checksum verification disabled
Dec 13 01:03:28.144837 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Dec 13 01:03:28.144857 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144876 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144890 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Dec 13 01:03:28.144905 kernel: ACPI: FACS 0x000000003FFFE000 000040
Dec 13 01:03:28.144919 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144933 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144947 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144965 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144979 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.144993 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.145008 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Dec 13 01:03:28.145023 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Dec 13 01:03:28.145037 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Dec 13 01:03:28.145051 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Dec 13 01:03:28.145065 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Dec 13 01:03:28.145083 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Dec 13 01:03:28.145097 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Dec 13 01:03:28.145111 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Dec 13 01:03:28.145126 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Dec 13 01:03:28.145141 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Dec 13 01:03:28.145155 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Dec 13 01:03:28.145169 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 01:03:28.145184 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 01:03:28.145199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Dec 13 01:03:28.145216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Dec 13 01:03:28.145229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Dec 13 01:03:28.145244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Dec 13 01:03:28.145258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Dec 13 01:03:28.145272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Dec 13 01:03:28.145287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Dec 13 01:03:28.145301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Dec 13 01:03:28.145316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Dec 13 01:03:28.145331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Dec 13 01:03:28.145349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Dec 13 01:03:28.145363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Dec 13 01:03:28.145378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Dec 13 01:03:28.145392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Dec 13 01:03:28.145406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Dec 13 01:03:28.145477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Dec 13 01:03:28.145492 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Dec 13 01:03:28.145504 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Dec 13 01:03:28.145517 kernel: Zone ranges:
Dec 13 01:03:28.145535 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 01:03:28.145547 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 01:03:28.145560 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:03:28.145573 kernel: Movable zone start for each node
Dec 13 01:03:28.145586 kernel: Early memory node ranges
Dec 13 01:03:28.145599 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 01:03:28.145613 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Dec 13 01:03:28.145625 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Dec 13 01:03:28.145638 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Dec 13 01:03:28.145656 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Dec 13 01:03:28.145669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 01:03:28.145682 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 01:03:28.145695 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Dec 13 01:03:28.145711 kernel: ACPI: PM-Timer IO Port: 0x408
Dec 13 01:03:28.145733 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Dec 13 01:03:28.145746 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Dec 13 01:03:28.145759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 01:03:28.145772 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 01:03:28.145788 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Dec 13 01:03:28.145800 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Dec 13 01:03:28.145811 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Dec 13 01:03:28.145824 kernel: Booting paravirtualized kernel on Hyper-V
Dec 13 01:03:28.145836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 01:03:28.145849 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Dec 13 01:03:28.145860 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Dec 13 01:03:28.145873 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Dec 13 01:03:28.145887 kernel: pcpu-alloc: [0] 0 1
Dec 13 01:03:28.145903 kernel: Hyper-V: PV spinlocks enabled
Dec 13 01:03:28.145915 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 01:03:28.145929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:03:28.145941 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 01:03:28.145952 kernel: random: crng init done
Dec 13 01:03:28.145964 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Dec 13 01:03:28.145977 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 01:03:28.145990 kernel: Fallback order for Node 0: 0
Dec 13 01:03:28.146008 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Dec 13 01:03:28.146031 kernel: Policy zone: Normal
Dec 13 01:03:28.146050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 01:03:28.146065 kernel: software IO TLB: area num 2.
Dec 13 01:03:28.146078 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved)
Dec 13 01:03:28.146092 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Dec 13 01:03:28.146107 kernel: ftrace: allocating 37902 entries in 149 pages
Dec 13 01:03:28.146120 kernel: ftrace: allocated 149 pages with 4 groups
Dec 13 01:03:28.146134 kernel: Dynamic Preempt: voluntary
Dec 13 01:03:28.146148 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 01:03:28.146164 kernel: rcu: RCU event tracing is enabled.
Dec 13 01:03:28.146185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Dec 13 01:03:28.146197 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 01:03:28.146210 kernel: Rude variant of Tasks RCU enabled.
Dec 13 01:03:28.146223 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 01:03:28.146238 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 01:03:28.146258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Dec 13 01:03:28.146273 kernel: Using NULL legacy PIC
Dec 13 01:03:28.146287 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Dec 13 01:03:28.146303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 01:03:28.146317 kernel: Console: colour dummy device 80x25
Dec 13 01:03:28.146330 kernel: printk: console [tty1] enabled
Dec 13 01:03:28.146344 kernel: printk: console [ttyS0] enabled
Dec 13 01:03:28.146359 kernel: printk: bootconsole [earlyser0] disabled
Dec 13 01:03:28.146371 kernel: ACPI: Core revision 20230628
Dec 13 01:03:28.146386 kernel: Failed to register legacy timer interrupt
Dec 13 01:03:28.146406 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 01:03:28.150735 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Dec 13 01:03:28.150751 kernel: Hyper-V: Using IPI hypercalls
Dec 13 01:03:28.150762 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Dec 13 01:03:28.150771 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Dec 13 01:03:28.150783 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Dec 13 01:03:28.150792 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Dec 13 01:03:28.150804 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Dec 13 01:03:28.150812 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Dec 13 01:03:28.150833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Dec 13 01:03:28.150842 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 01:03:28.150853 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 01:03:28.150862 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 01:03:28.150873 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 01:03:28.150881 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 01:03:28.150891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 01:03:28.150901 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Dec 13 01:03:28.150912 kernel: RETBleed: Vulnerable
Dec 13 01:03:28.150925 kernel: Speculative Store Bypass: Vulnerable
Dec 13 01:03:28.150935 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:03:28.150947 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Dec 13 01:03:28.150955 kernel: GDS: Unknown: Dependent on hypervisor status
Dec 13 01:03:28.150965 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 01:03:28.150977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 01:03:28.150985 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 01:03:28.150995 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Dec 13 01:03:28.151005 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Dec 13 01:03:28.151013 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Dec 13 01:03:28.151024 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 01:03:28.151035 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Dec 13 01:03:28.151046 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Dec 13 01:03:28.151055 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Dec 13 01:03:28.151063 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Dec 13 01:03:28.151074 kernel: Freeing SMP alternatives memory: 32K
Dec 13 01:03:28.151083 kernel: pid_max: default: 32768 minimum: 301
Dec 13 01:03:28.151094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 01:03:28.151102 kernel: landlock: Up and running.
Dec 13 01:03:28.151111 kernel: SELinux: Initializing.
Dec 13 01:03:28.151122 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:03:28.151130 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 01:03:28.151141 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Dec 13 01:03:28.151153 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:03:28.151165 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:03:28.151174 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Dec 13 01:03:28.151182 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Dec 13 01:03:28.151193 kernel: signal: max sigframe size: 3632
Dec 13 01:03:28.151201 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 01:03:28.151214 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 01:03:28.151222 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 01:03:28.151233 kernel: smp: Bringing up secondary CPUs ...
Dec 13 01:03:28.151245 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 01:03:28.151254 kernel: .... node #0, CPUs: #1
Dec 13 01:03:28.151265 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Dec 13 01:03:28.151275 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 01:03:28.151286 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 01:03:28.151295 kernel: smpboot: Max logical packages: 1
Dec 13 01:03:28.151305 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Dec 13 01:03:28.151314 kernel: devtmpfs: initialized
Dec 13 01:03:28.151327 kernel: x86/mm: Memory block size: 128MB
Dec 13 01:03:28.151338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Dec 13 01:03:28.151349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 01:03:28.151358 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Dec 13 01:03:28.151369 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 01:03:28.151379 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 01:03:28.151389 kernel: audit: initializing netlink subsys (disabled)
Dec 13 01:03:28.151400 kernel: audit: type=2000 audit(1734051806.028:1): state=initialized audit_enabled=0 res=1
Dec 13 01:03:28.151411 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 01:03:28.151451 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 01:03:28.151463 kernel: cpuidle: using governor menu
Dec 13 01:03:28.151471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 01:03:28.151482 kernel: dca service started, version 1.12.1
Dec 13 01:03:28.151491 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Dec 13 01:03:28.151500 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 01:03:28.151511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 01:03:28.151520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 01:03:28.151531 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 01:03:28.151542 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 01:03:28.151554 kernel: ACPI: Added _OSI(Module Device)
Dec 13 01:03:28.151563 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 01:03:28.151571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 01:03:28.151582 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 01:03:28.151590 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 01:03:28.151601 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 01:03:28.151610 kernel: ACPI: Interpreter enabled
Dec 13 01:03:28.151618 kernel: ACPI: PM: (supports S0 S5)
Dec 13 01:03:28.151632 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 01:03:28.151641 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 01:03:28.151652 kernel: PCI: Ignoring E820 reservations for host bridge windows
Dec 13 01:03:28.151661 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Dec 13 01:03:28.151670 kernel: iommu: Default domain type: Translated
Dec 13 01:03:28.151681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 01:03:28.151689 kernel: efivars: Registered efivars operations
Dec 13 01:03:28.151700 kernel: PCI: Using ACPI for IRQ routing
Dec 13 01:03:28.151709 kernel: PCI: System does not support PCI
Dec 13 01:03:28.151721 kernel: vgaarb: loaded
Dec 13 01:03:28.151731 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Dec 13 01:03:28.151739 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 01:03:28.151751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 01:03:28.151759 kernel: pnp: PnP ACPI init
Dec 13 01:03:28.151770 kernel: pnp: PnP ACPI: found 3 devices
Dec 13 01:03:28.151779 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 01:03:28.151791 kernel: NET: Registered PF_INET protocol family
Dec 13 01:03:28.151800 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 01:03:28.151814 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Dec 13 01:03:28.151826 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 01:03:28.151835 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 01:03:28.151845 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Dec 13 01:03:28.151856 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Dec 13 01:03:28.151865 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:03:28.151876 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Dec 13 01:03:28.151885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 01:03:28.151894 kernel: NET: Registered PF_XDP protocol family
Dec 13 01:03:28.151907 kernel: PCI: CLS 0 bytes, default 64
Dec 13 01:03:28.151915 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 01:03:28.151927 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB)
Dec 13 01:03:28.151935 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Dec 13 01:03:28.151946 kernel: Initialise system trusted keyrings
Dec 13 01:03:28.151955 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Dec 13 01:03:28.151964 kernel: Key type asymmetric registered
Dec 13 01:03:28.151974 kernel: Asymmetric key parser 'x509' registered
Dec 13 01:03:28.151982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 01:03:28.151997 kernel: io scheduler mq-deadline registered
Dec 13 01:03:28.152005 kernel: io scheduler kyber registered
Dec 13 01:03:28.152015 kernel: io scheduler bfq registered
Dec 13 01:03:28.152025 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 01:03:28.152033 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 01:03:28.152044 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 01:03:28.152053 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 01:03:28.152064 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 01:03:28.152279 kernel: rtc_cmos 00:02: registered as rtc0
Dec 13 01:03:28.152437 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:03:27 UTC (1734051807)
Dec 13 01:03:28.152566 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Dec 13 01:03:28.152586 kernel: intel_pstate: CPU model not supported
Dec 13 01:03:28.152601 kernel: efifb: probing for efifb
Dec 13 01:03:28.152618 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Dec 13 01:03:28.152634 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Dec 13 01:03:28.152650 kernel: efifb: scrolling: redraw
Dec 13 01:03:28.152670 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 01:03:28.152686 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:03:28.152702 kernel: fb0: EFI VGA frame buffer device
Dec 13 01:03:28.152717 kernel: pstore: Using crash dump compression: deflate
Dec 13 01:03:28.152734 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 01:03:28.152750 kernel: NET: Registered PF_INET6 protocol family
Dec 13 01:03:28.152765 kernel: Segment Routing with IPv6
Dec 13 01:03:28.152780 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 01:03:28.152796 kernel: NET: Registered PF_PACKET protocol family
Dec 13 01:03:28.152812 kernel: Key type dns_resolver registered
Dec 13 01:03:28.152831 kernel: IPI shorthand broadcast: enabled
Dec 13 01:03:28.152847 kernel: sched_clock: Marking stable (999003400, 61079000)->(1380437500, -320355100)
Dec 13 01:03:28.152862 kernel: registered taskstats version 1
Dec 13 01:03:28.152878 kernel: Loading compiled-in X.509 certificates
Dec 13 01:03:28.152894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0'
Dec 13 01:03:28.152909 kernel: Key type .fscrypt registered
Dec 13 01:03:28.152925 kernel: Key type fscrypt-provisioning registered
Dec 13 01:03:28.152941 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 01:03:28.152960 kernel: ima: Allocated hash algorithm: sha1
Dec 13 01:03:28.152976 kernel: ima: No architecture policies found
Dec 13 01:03:28.152991 kernel: clk: Disabling unused clocks
Dec 13 01:03:28.153011 kernel: Freeing unused kernel image (initmem) memory: 42844K
Dec 13 01:03:28.153027 kernel: Write protecting the kernel read-only data: 36864k
Dec 13 01:03:28.153042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Dec 13 01:03:28.153058 kernel: Run /init as init process
Dec 13 01:03:28.153074 kernel: with arguments:
Dec 13 01:03:28.153089 kernel: /init
Dec 13 01:03:28.153107 kernel: with environment:
Dec 13 01:03:28.153123 kernel: HOME=/
Dec 13 01:03:28.153138 kernel: TERM=linux
Dec 13 01:03:28.153154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 01:03:28.153172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:03:28.153192 systemd[1]: Detected virtualization microsoft.
Dec 13 01:03:28.153208 systemd[1]: Detected architecture x86-64.
Dec 13 01:03:28.153222 systemd[1]: Running in initrd.
Dec 13 01:03:28.153242 systemd[1]: No hostname configured, using default hostname.
Dec 13 01:03:28.153257 systemd[1]: Hostname set to .
Dec 13 01:03:28.153274 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:03:28.153289 systemd[1]: Queued start job for default target initrd.target.
Dec 13 01:03:28.153305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:03:28.153322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:03:28.153340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 01:03:28.153356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:03:28.153375 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 01:03:28.153392 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 01:03:28.153411 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 01:03:28.153455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 01:03:28.153471 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:03:28.153488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:03:28.153505 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:03:28.153525 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:03:28.153541 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:03:28.153557 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:03:28.153574 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:03:28.153591 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:03:28.153607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:03:28.153624 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:03:28.153640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:03:28.153656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:03:28.153677 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:03:28.153693 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:03:28.153709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 01:03:28.153725 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:03:28.153742 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 01:03:28.153758 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 01:03:28.153774 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:03:28.153791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:03:28.153811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:28.153864 systemd-journald[176]: Collecting audit messages is disabled.
Dec 13 01:03:28.153896 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 01:03:28.153909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:03:28.153929 systemd-journald[176]: Journal started
Dec 13 01:03:28.153982 systemd-journald[176]: Runtime Journal (/run/log/journal/17930e43c7a845cc9e7a45da3fadd5d0) is 8.0M, max 158.8M, 150.8M free.
Dec 13 01:03:28.158481 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:03:28.159372 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 01:03:28.170798 systemd-modules-load[177]: Inserted module 'overlay'
Dec 13 01:03:28.173301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:03:28.174634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:03:28.181797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:28.206782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:03:28.210617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:03:28.236730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:03:28.241492 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 01:03:28.250615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:03:28.257117 kernel: Bridge firewalling registered
Dec 13 01:03:28.262522 systemd-modules-load[177]: Inserted module 'br_netfilter'
Dec 13 01:03:28.264910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:28.272236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:03:28.275453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:03:28.290813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 01:03:28.299632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:03:28.307745 dracut-cmdline[207]: dracut-dracut-053
Dec 13 01:03:28.307745 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff
Dec 13 01:03:28.341936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:03:28.356844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:03:28.400083 systemd-resolved[265]: Positive Trust Anchors:
Dec 13 01:03:28.400101 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:03:28.400158 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:03:28.427343 systemd-resolved[265]: Defaulting to hostname 'linux'.
Dec 13 01:03:28.431437 kernel: SCSI subsystem initialized
Dec 13 01:03:28.432660 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:03:28.438983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:03:28.450436 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 01:03:28.462454 kernel: iscsi: registered transport (tcp)
Dec 13 01:03:28.487481 kernel: iscsi: registered transport (qla4xxx)
Dec 13 01:03:28.487610 kernel: QLogic iSCSI HBA Driver
Dec 13 01:03:28.526134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:03:28.535765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 01:03:28.570879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 01:03:28.571002 kernel: device-mapper: uevent: version 1.0.3
Dec 13 01:03:28.574383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 01:03:28.620457 kernel: raid6: avx512x4 gen() 17354 MB/s
Dec 13 01:03:28.639442 kernel: raid6: avx512x2 gen() 27559 MB/s
Dec 13 01:03:28.658429 kernel: raid6: avx512x1 gen() 27745 MB/s
Dec 13 01:03:28.678430 kernel: raid6: avx2x4 gen() 24621 MB/s
Dec 13 01:03:28.697426 kernel: raid6: avx2x2 gen() 24635 MB/s
Dec 13 01:03:28.718221 kernel: raid6: avx2x1 gen() 21673 MB/s
Dec 13 01:03:28.718261 kernel: raid6: using algorithm avx512x1 gen() 27745 MB/s
Dec 13 01:03:28.740398 kernel: raid6: .... xor() 25994 MB/s, rmw enabled
Dec 13 01:03:28.740448 kernel: raid6: using avx512x2 recovery algorithm
Dec 13 01:03:28.763446 kernel: xor: automatically using best checksumming function avx
Dec 13 01:03:28.912448 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 01:03:28.922776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:03:28.931772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:03:28.944474 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Dec 13 01:03:28.949020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:03:28.963622 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 01:03:28.978118 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation
Dec 13 01:03:29.010922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:03:29.018854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:03:29.063085 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:03:29.080652 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 01:03:29.109978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:03:29.119294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:03:29.123144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:03:29.129762 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:03:29.145706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 01:03:29.178990 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:03:29.184166 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 01:03:29.196031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:03:29.210592 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:03:29.210626 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:03:29.196270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:29.200174 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:03:29.213341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:29.241152 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 01:03:29.213681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.213814 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.233239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.263986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.270951 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:03:29.270988 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:03:29.276083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:29.281306 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:03:29.282653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.298591 kernel: PTP clock support registered
Dec 13 01:03:29.291298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.312681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.325844 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:03:29.331945 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:03:29.332042 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:03:29.341445 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 01:03:29.341514 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:03:29.345466 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:03:29.347179 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:03:29.820354 systemd-resolved[265]: Clock change detected. Flushing caches.
Dec 13 01:03:29.838226 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:03:29.843022 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:03:29.845970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.856869 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:03:29.856927 kernel: scsi host1: storvsc_host_t
Dec 13 01:03:29.857179 kernel: scsi host0: storvsc_host_t
Dec 13 01:03:29.873699 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 01:03:29.873779 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:03:29.874010 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:03:29.874298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:03:29.881035 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:03:29.919298 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:03:29.928503 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: VF slot 1 added
Dec 13 01:03:29.928721 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:03:29.928742 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:03:29.916829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:29.938194 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:03:29.942257 kernel: hv_pci 68d7c4cc-67b9-4bd6-b001-e0585f0ff94a: PCI VMBus probing: Using version 0x10004
Dec 13 01:03:30.001030 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:03:30.001430 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:03:30.001609 kernel: hv_pci 68d7c4cc-67b9-4bd6-b001-e0585f0ff94a: PCI host bridge to bus 67b9:00
Dec 13 01:03:30.001766 kernel: pci_bus 67b9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 01:03:30.001926 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:03:30.002077 kernel: pci_bus 67b9:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:03:30.002230 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 01:03:30.002792 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 01:03:30.002985 kernel: pci 67b9:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 01:03:30.003179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:03:30.003200 kernel: pci 67b9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:03:30.003413 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:03:30.003582 kernel: pci 67b9:00:02.0: enabling Extended Tags
Dec 13 01:03:30.003761 kernel: pci 67b9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 67b9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 01:03:30.003945 kernel: pci_bus 67b9:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:03:30.004100 kernel: pci 67b9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:03:30.176137 kernel: mlx5_core 67b9:00:02.0: enabling device (0000 -> 0002)
Dec 13 01:03:30.408644 kernel: mlx5_core 67b9:00:02.0: firmware version: 14.30.5000
Dec 13 01:03:30.408927 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: VF registering: eth1
Dec 13 01:03:30.409128 kernel: mlx5_core 67b9:00:02.0 eth1: joined to eth0
Dec 13 01:03:30.409386 kernel: mlx5_core 67b9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:03:30.417228 kernel: mlx5_core 67b9:00:02.0 enP26553s1: renamed from eth1
Dec 13 01:03:30.483202 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 13 01:03:30.564239 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (453)
Dec 13 01:03:30.568130 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 13 01:03:30.592006 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 13 01:03:30.593367 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 13 01:03:30.609751 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:03:30.632241 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Dec 13 01:03:30.652724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:03:31.636245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:03:31.636344 disk-uuid[601]: The operation has completed successfully.
Dec 13 01:03:31.746612 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:03:31.746741 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:03:31.780526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:03:31.789122 sh[718]: Success
Dec 13 01:03:31.828419 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:03:32.053967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:03:32.072374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:03:32.075084 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:03:32.106324 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:03:32.106426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:32.110228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:03:32.113301 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:03:32.116038 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:03:32.432705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:03:32.439015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:03:32.450459 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:03:32.457468 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:03:32.470163 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:32.470287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:32.473688 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:32.494954 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:32.510241 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:32.510858 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:03:32.521706 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:03:32.535593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:03:32.576915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:03:32.586627 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:03:32.608317 systemd-networkd[902]: lo: Link UP
Dec 13 01:03:32.608328 systemd-networkd[902]: lo: Gained carrier
Dec 13 01:03:32.610585 systemd-networkd[902]: Enumeration completed
Dec 13 01:03:32.610905 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:03:32.611918 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:32.611922 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:03:32.614001 systemd[1]: Reached target network.target - Network.
Dec 13 01:03:32.684240 kernel: mlx5_core 67b9:00:02.0 enP26553s1: Link up
Dec 13 01:03:32.718235 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: Data path switched to VF: enP26553s1
Dec 13 01:03:32.718338 systemd-networkd[902]: enP26553s1: Link UP
Dec 13 01:03:32.718477 systemd-networkd[902]: eth0: Link UP
Dec 13 01:03:32.718631 systemd-networkd[902]: eth0: Gained carrier
Dec 13 01:03:32.718645 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:32.725536 systemd-networkd[902]: enP26553s1: Gained carrier
Dec 13 01:03:32.751299 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:03:33.353203 ignition[840]: Ignition 2.19.0
Dec 13 01:03:33.353239 ignition[840]: Stage: fetch-offline
Dec 13 01:03:33.353294 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.356846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:03:33.353305 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.353430 ignition[840]: parsed url from cmdline: ""
Dec 13 01:03:33.353434 ignition[840]: no config URL provided
Dec 13 01:03:33.353441 ignition[840]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.353452 ignition[840]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.371467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:03:33.353460 ignition[840]: failed to fetch config: resource requires networking
Dec 13 01:03:33.355587 ignition[840]: Ignition finished successfully
Dec 13 01:03:33.390816 ignition[910]: Ignition 2.19.0
Dec 13 01:03:33.390828 ignition[910]: Stage: fetch
Dec 13 01:03:33.391091 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.391106 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.391231 ignition[910]: parsed url from cmdline: ""
Dec 13 01:03:33.391235 ignition[910]: no config URL provided
Dec 13 01:03:33.391242 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.391251 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.391277 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 01:03:33.472144 ignition[910]: GET result: OK
Dec 13 01:03:33.472342 ignition[910]: config has been read from IMDS userdata
Dec 13 01:03:33.472385 ignition[910]: parsing config with SHA512: dca9aa0709d2752a8fdb3958d8bc7d06164da6b9fef83b075581d2f00fe5d1e8ddd7d94daaeed50de53162e0e55328c88c7d93fa9c7bb695556b08e55df039a5
Dec 13 01:03:33.480165 unknown[910]: fetched base config from "system"
Dec 13 01:03:33.480179 unknown[910]: fetched base config from "system"
Dec 13 01:03:33.480201 unknown[910]: fetched user config from "azure"
Dec 13 01:03:33.486638 ignition[910]: fetch: fetch complete
Dec 13 01:03:33.486645 ignition[910]: fetch: fetch passed
Dec 13 01:03:33.488640 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:03:33.486722 ignition[910]: Ignition finished successfully
Dec 13 01:03:33.506686 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:03:33.527654 ignition[916]: Ignition 2.19.0
Dec 13 01:03:33.527667 ignition[916]: Stage: kargs
Dec 13 01:03:33.527928 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.527943 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.534614 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:03:33.528990 ignition[916]: kargs: kargs passed
Dec 13 01:03:33.529041 ignition[916]: Ignition finished successfully
Dec 13 01:03:33.552520 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:03:33.570077 ignition[922]: Ignition 2.19.0
Dec 13 01:03:33.570090 ignition[922]: Stage: disks
Dec 13 01:03:33.572747 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:03:33.570377 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.570394 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.571431 ignition[922]: disks: disks passed
Dec 13 01:03:33.571487 ignition[922]: Ignition finished successfully
Dec 13 01:03:33.588254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:03:33.591350 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:03:33.601258 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:03:33.606726 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:03:33.609490 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:03:33.629514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:03:33.689779 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Dec 13 01:03:33.696900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:03:33.712425 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:03:33.808235 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:03:33.809505 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:03:33.814464 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:03:33.860358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:03:33.867347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:03:33.875411 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:03:33.879177 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Dec 13 01:03:33.892907 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:33.893001 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:33.893028 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:33.892798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:03:33.892869 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:03:33.900227 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:33.909416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:03:33.917081 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:03:33.924239 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:03:34.384583 systemd-networkd[902]: eth0: Gained IPv6LL
Dec 13 01:03:34.512956 systemd-networkd[902]: enP26553s1: Gained IPv6LL
Dec 13 01:03:34.703287 coreos-metadata[943]: Dec 13 01:03:34.703 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:03:34.711013 coreos-metadata[943]: Dec 13 01:03:34.710 INFO Fetch successful
Dec 13 01:03:34.714312 coreos-metadata[943]: Dec 13 01:03:34.712 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:03:34.733877 coreos-metadata[943]: Dec 13 01:03:34.733 INFO Fetch successful
Dec 13 01:03:34.741373 coreos-metadata[943]: Dec 13 01:03:34.741 INFO wrote hostname ci-4081.2.1-a-672c6884da to /sysroot/etc/hostname
Dec 13 01:03:34.745673 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:03:34.750187 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:03:34.777820 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:03:34.786824 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:03:34.792666 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:03:35.659470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:03:35.672508 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:03:35.680480 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:03:35.694285 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:03:35.702392 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:35.715649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:03:35.737308 ignition[1061]: INFO : Ignition 2.19.0
Dec 13 01:03:35.737308 ignition[1061]: INFO : Stage: mount
Dec 13 01:03:35.744983 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:35.744983 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:35.744983 ignition[1061]: INFO : mount: mount passed
Dec 13 01:03:35.744983 ignition[1061]: INFO : Ignition finished successfully
Dec 13 01:03:35.740706 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:03:35.756394 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:03:35.767856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:03:35.786620 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071)
Dec 13 01:03:35.786694 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:35.790225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:35.794920 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:35.800231 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:35.802448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:03:35.829371 ignition[1088]: INFO : Ignition 2.19.0 Dec 13 01:03:35.829371 ignition[1088]: INFO : Stage: files Dec 13 01:03:35.834429 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:03:35.834429 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:03:35.834429 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping Dec 13 01:03:35.858910 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 01:03:35.858910 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 01:03:35.934881 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 01:03:35.939475 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 01:03:35.939475 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 01:03:35.935551 unknown[1088]: wrote ssh authorized keys file for user: core Dec 13 01:03:35.965203 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 01:03:36.188431 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 01:03:36.276771 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 
01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 01:03:36.760722 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Dec 13 01:03:37.069097 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 01:03:37.069097 ignition[1088]: INFO : files: op(c): [started] processing unit "containerd.service" Dec 13 01:03:37.110115 ignition[1088]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(c): [finished] processing unit "containerd.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): [started] processing unit "prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): [finished] processing unit "prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 01:03:37.119836 ignition[1088]: INFO : files: files passed Dec 13 01:03:37.119836 ignition[1088]: INFO : Ignition finished successfully Dec 13 01:03:37.112968 systemd[1]: Finished ignition-files.service - Ignition (files). Dec 13 01:03:37.136607 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Dec 13 01:03:37.181528 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Dec 13 01:03:37.191287 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 01:03:37.194841 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
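Everything the files stage logged above (the sysext symlink, the containerd drop-in, the prepare-helm unit and its preset) is driven by a declarative Ignition config. A hand-written illustration of the spec-3.x fragment that would produce those operations follows; the field names match the published Ignition schema, but the spec version and the unit/drop-in bodies are assumptions, not values recovered from this machine:

```python
import json

config = {
    "ignition": {"version": "3.4.0"},  # assumed; Ignition 2.19.0 accepts spec 3.4
    "storage": {
        "links": [{                    # -> op(a) in the log
            "path": "/etc/extensions/kubernetes.raw",
            "target": "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw",
            "hard": False,
        }],
    },
    "systemd": {
        "units": [
            {                          # -> op(c)/op(d): drop-in for an existing unit
                "name": "containerd.service",
                "dropins": [{
                    "name": "10-use-cgroupfs.conf",
                    "contents": "[Service]\n# body not shown in the log\n",
                }],
            },
            {                          # -> op(e)/op(f)/op(10): new unit plus preset
                "name": "prepare-helm.service",
                "enabled": True,
                "contents": "[Unit]\n# body not shown in the log\n",
            },
        ],
    },
}
print(json.dumps(config, indent=2))
```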
Dec 13 01:03:37.215159 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:03:37.215159 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:03:37.220503 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 01:03:37.219361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:03:37.222721 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Dec 13 01:03:37.242513 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Dec 13 01:03:37.285728 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 01:03:37.285871 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Dec 13 01:03:37.292330 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Dec 13 01:03:37.300696 systemd[1]: Reached target initrd.target - Initrd Default Target. Dec 13 01:03:37.306446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Dec 13 01:03:37.314582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Dec 13 01:03:37.330617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:03:37.339490 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Dec 13 01:03:37.350662 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:03:37.350985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:03:37.351455 systemd[1]: Stopped target timers.target - Timer Units. Dec 13 01:03:37.351913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 01:03:37.352076 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Dec 13 01:03:37.353312 systemd[1]: Stopped target initrd.target - Initrd Default Target. Dec 13 01:03:37.353711 systemd[1]: Stopped target basic.target - Basic System. Dec 13 01:03:37.354187 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Dec 13 01:03:37.355103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Dec 13 01:03:37.355997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Dec 13 01:03:37.356473 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Dec 13 01:03:37.356909 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:03:37.357365 systemd[1]: Stopped target sysinit.target - System Initialization. Dec 13 01:03:37.357812 systemd[1]: Stopped target local-fs.target - Local File Systems. Dec 13 01:03:37.358393 systemd[1]: Stopped target swap.target - Swaps. Dec 13 01:03:37.358807 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 01:03:37.358972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:03:37.359736 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:03:37.360177 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:03:37.360574 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Dec 13 01:03:37.481928 ignition[1140]: INFO : Ignition 2.19.0 Dec 13 01:03:37.481928 ignition[1140]: INFO : Stage: umount Dec 13 01:03:37.481928 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d" Dec 13 01:03:37.481928 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Dec 13 01:03:37.481928 ignition[1140]: INFO : umount: umount passed Dec 13 01:03:37.481928 ignition[1140]: INFO : Ignition finished successfully Dec 13 01:03:37.396571 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:03:37.403127 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 01:03:37.403366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Dec 13 01:03:37.409587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 01:03:37.409776 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Dec 13 01:03:37.414714 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 01:03:37.414910 systemd[1]: Stopped ignition-files.service - Ignition (files). Dec 13 01:03:37.423126 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 01:03:37.425960 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Dec 13 01:03:37.453656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Dec 13 01:03:37.476745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Dec 13 01:03:37.481945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 01:03:37.482201 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:03:37.533102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 01:03:37.533342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:03:37.544035 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 01:03:37.544176 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Dec 13 01:03:37.553647 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 01:03:37.555160 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 01:03:37.555479 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Dec 13 01:03:37.564577 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 01:03:37.564673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Dec 13 01:03:37.572319 systemd[1]: ignition-fetch.service: Deactivated successfully. Dec 13 01:03:37.572416 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Dec 13 01:03:37.577616 systemd[1]: Stopped target network.target - Network. Dec 13 01:03:37.584895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 01:03:37.585010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Dec 13 01:03:37.594120 systemd[1]: Stopped target paths.target - Path Units. Dec 13 01:03:37.596640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 01:03:37.601706 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:03:37.605241 systemd[1]: Stopped target slices.target - Slice Units. Dec 13 01:03:37.607616 systemd[1]: Stopped target sockets.target - Socket Units. Dec 13 01:03:37.610147 systemd[1]: iscsid.socket: Deactivated successfully. 
Dec 13 01:03:37.610229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:03:37.617132 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 01:03:37.617200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:03:37.624765 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 01:03:37.627052 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Dec 13 01:03:37.632166 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Dec 13 01:03:37.632263 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Dec 13 01:03:37.640546 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Dec 13 01:03:37.644081 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Dec 13 01:03:37.652627 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 01:03:37.652734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Dec 13 01:03:37.669765 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 01:03:37.669888 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Dec 13 01:03:37.672317 systemd-networkd[902]: eth0: DHCPv6 lease lost Dec 13 01:03:37.677278 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 01:03:37.677407 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Dec 13 01:03:37.681858 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 01:03:37.681940 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:03:37.708490 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Dec 13 01:03:37.715640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 01:03:37.715772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 01:03:37.726704 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 01:03:37.726817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:03:37.732581 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 01:03:37.733064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Dec 13 01:03:37.746303 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Dec 13 01:03:37.746432 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:03:37.757804 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:03:37.778034 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 01:03:37.778248 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:03:37.788519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 01:03:37.788597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Dec 13 01:03:37.797051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 01:03:37.797122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:03:37.805461 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 01:03:37.805566 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:03:37.813873 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 01:03:37.813981 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
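Among the units torn down above is parse-ip-for-networkd.service, described as "Write systemd-networkd units from cmdline". A hypothetical sketch of that translation step, using the ip= field order from dracut.cmdline(7) (this is not the Flatcar script itself, and error handling is omitted):

```python
def ip_arg_to_network_unit(arg: str) -> str:
    """Translate ip=<client>:<peer>:<gw>:<netmask>:<hostname>:<iface>:<autoconf>
    into the body of a systemd-networkd .network unit."""
    f = (arg.removeprefix("ip=").split(":") + [""] * 7)[:7]
    client, _peer, gateway, netmask, _host, iface, autoconf = f
    unit = ["[Match]", f"Name={iface or '*'}", "", "[Network]"]
    if autoconf in ("dhcp", "on", "any", "dhcp6"):
        unit.append("DHCP=yes")
    elif client:
        # dotted-quad netmask -> CIDR prefix length for networkd's Address=
        prefix = sum(bin(int(octet)).count("1") for octet in netmask.split("."))
        unit += [f"Address={client}/{prefix}", f"Gateway={gateway}"]
    return "\n".join(unit) + "\n"

print(ip_arg_to_network_unit("ip=10.0.0.4::10.0.0.1:255.255.255.0::eth0:off"))
```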
Dec 13 01:03:37.826975 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: Data path switched from VF: enP26553s1 Dec 13 01:03:37.824063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 01:03:37.824159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:03:37.839516 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Dec 13 01:03:37.842658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 01:03:37.842741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:03:37.849228 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Dec 13 01:03:37.849297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:03:37.859141 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 01:03:37.859343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:03:37.865642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 01:03:37.865708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:03:37.866197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 01:03:37.866833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Dec 13 01:03:37.893933 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 01:03:37.894111 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Dec 13 01:03:38.126500 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 01:03:38.126648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Dec 13 01:03:38.134918 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Dec 13 01:03:38.140963 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 01:03:38.141076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Dec 13 01:03:38.157598 systemd[1]: Starting initrd-switch-root.service - Switch Root... Dec 13 01:03:38.249657 systemd[1]: Switching root. 
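"Switching root" is the initrd handing control to the real root filesystem mounted at /sysroot. Conceptually the move is the classic switch-root sequence, sketched below in simplified form (real systemd also moves the API filesystems across and re-executes itself as PID 1 rather than exec'ing a fresh binary):

```python
import ctypes, os

libc = ctypes.CDLL("libc.so.6", use_errno=True)
MS_MOVE = 0x2000  # from <sys/mount.h>

os.chdir("/sysroot")                                  # the new root, mounted earlier
if libc.mount(b".", b"/", None, MS_MOVE, None) != 0:  # move it over /
    errno = ctypes.get_errno()
    raise OSError(errno, os.strerror(errno))
os.chroot(".")
os.chdir("/")
os.execv("/usr/lib/systemd/systemd", ["/usr/lib/systemd/systemd"])
```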
Dec 13 01:03:38.287494 systemd-journald[176]: Journal stopped
ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Dec 13 01:03:28.145097 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Dec 13 01:03:28.145111 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Dec 13 01:03:28.145126 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Dec 13 01:03:28.145141 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Dec 13 01:03:28.145155 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Dec 13 01:03:28.145169 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Dec 13 01:03:28.145184 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Dec 13 01:03:28.145199 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Dec 13 01:03:28.145216 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Dec 13 01:03:28.145229 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Dec 13 01:03:28.145244 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Dec 13 01:03:28.145258 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Dec 13 01:03:28.145272 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Dec 13 01:03:28.145287 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Dec 13 01:03:28.145301 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Dec 13 01:03:28.145316 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Dec 13 01:03:28.145331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Dec 13 01:03:28.145349 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Dec 13 01:03:28.145363 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Dec 13 01:03:28.145378 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Dec 13 01:03:28.145392 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Dec 13 01:03:28.145406 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Dec 13 01:03:28.145477 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Dec 13 01:03:28.145492 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Dec 13 01:03:28.145504 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Dec 13 01:03:28.145517 kernel: Zone ranges: Dec 13 01:03:28.145535 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Dec 13 01:03:28.145547 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Dec 13 01:03:28.145560 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 01:03:28.145573 kernel: Movable zone start for each node Dec 13 01:03:28.145586 kernel: Early memory node ranges Dec 13 01:03:28.145599 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Dec 13 01:03:28.145613 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Dec 13 01:03:28.145625 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Dec 13 01:03:28.145638 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Dec 13 01:03:28.145656 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Dec 13 01:03:28.145669 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Dec 13 01:03:28.145682 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Dec 13 01:03:28.145695 kernel: On node 0, zone DMA32: 190 pages in unavailable 
ranges Dec 13 01:03:28.145711 kernel: ACPI: PM-Timer IO Port: 0x408 Dec 13 01:03:28.145733 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Dec 13 01:03:28.145746 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Dec 13 01:03:28.145759 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Dec 13 01:03:28.145772 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Dec 13 01:03:28.145788 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Dec 13 01:03:28.145800 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Dec 13 01:03:28.145811 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Dec 13 01:03:28.145824 kernel: Booting paravirtualized kernel on Hyper-V Dec 13 01:03:28.145836 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Dec 13 01:03:28.145849 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Dec 13 01:03:28.145860 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Dec 13 01:03:28.145873 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Dec 13 01:03:28.145887 kernel: pcpu-alloc: [0] 0 1 Dec 13 01:03:28.145903 kernel: Hyper-V: PV spinlocks enabled Dec 13 01:03:28.145915 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Dec 13 01:03:28.145929 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:03:28.145941 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Dec 13 01:03:28.145952 kernel: random: crng init done Dec 13 01:03:28.145964 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Dec 13 01:03:28.145977 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Dec 13 01:03:28.145990 kernel: Fallback order for Node 0: 0 Dec 13 01:03:28.146008 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Dec 13 01:03:28.146031 kernel: Policy zone: Normal Dec 13 01:03:28.146050 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Dec 13 01:03:28.146065 kernel: software IO TLB: area num 2. Dec 13 01:03:28.146078 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2299K rwdata, 22724K rodata, 42844K init, 2348K bss, 310124K reserved, 0K cma-reserved) Dec 13 01:03:28.146092 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Dec 13 01:03:28.146107 kernel: ftrace: allocating 37902 entries in 149 pages Dec 13 01:03:28.146120 kernel: ftrace: allocated 149 pages with 4 groups Dec 13 01:03:28.146134 kernel: Dynamic Preempt: voluntary Dec 13 01:03:28.146148 kernel: rcu: Preemptible hierarchical RCU implementation. Dec 13 01:03:28.146164 kernel: rcu: RCU event tracing is enabled. Dec 13 01:03:28.146185 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Dec 13 01:03:28.146197 kernel: Trampoline variant of Tasks RCU enabled. Dec 13 01:03:28.146210 kernel: Rude variant of Tasks RCU enabled. Dec 13 01:03:28.146223 kernel: Tracing variant of Tasks RCU enabled. 
Dec 13 01:03:28.146238 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Dec 13 01:03:28.146258 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Dec 13 01:03:28.146273 kernel: Using NULL legacy PIC Dec 13 01:03:28.146287 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Dec 13 01:03:28.146303 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Dec 13 01:03:28.146317 kernel: Console: colour dummy device 80x25 Dec 13 01:03:28.146330 kernel: printk: console [tty1] enabled Dec 13 01:03:28.146344 kernel: printk: console [ttyS0] enabled Dec 13 01:03:28.146359 kernel: printk: bootconsole [earlyser0] disabled Dec 13 01:03:28.146371 kernel: ACPI: Core revision 20230628 Dec 13 01:03:28.146386 kernel: Failed to register legacy timer interrupt Dec 13 01:03:28.146406 kernel: APIC: Switch to symmetric I/O mode setup Dec 13 01:03:28.150735 kernel: Hyper-V: enabling crash_kexec_post_notifiers Dec 13 01:03:28.150751 kernel: Hyper-V: Using IPI hypercalls Dec 13 01:03:28.150762 kernel: APIC: send_IPI() replaced with hv_send_ipi() Dec 13 01:03:28.150771 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Dec 13 01:03:28.150783 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Dec 13 01:03:28.150792 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Dec 13 01:03:28.150804 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Dec 13 01:03:28.150812 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Dec 13 01:03:28.150833 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Dec 13 01:03:28.150842 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Dec 13 01:03:28.150853 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Dec 13 01:03:28.150862 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Dec 13 01:03:28.150873 kernel: Spectre V2 : Mitigation: Retpolines Dec 13 01:03:28.150881 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Dec 13 01:03:28.150891 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Dec 13 01:03:28.150901 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Dec 13 01:03:28.150912 kernel: RETBleed: Vulnerable Dec 13 01:03:28.150925 kernel: Speculative Store Bypass: Vulnerable Dec 13 01:03:28.150935 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:03:28.150947 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Dec 13 01:03:28.150955 kernel: GDS: Unknown: Dependent on hypervisor status Dec 13 01:03:28.150965 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Dec 13 01:03:28.150977 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Dec 13 01:03:28.150985 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Dec 13 01:03:28.150995 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Dec 13 01:03:28.151005 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Dec 13 01:03:28.151013 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Dec 13 01:03:28.151024 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Dec 13 01:03:28.151035 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Dec 13 01:03:28.151046 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Dec 13 01:03:28.151055 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Dec 13 01:03:28.151063 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Dec 13 01:03:28.151074 kernel: Freeing SMP alternatives memory: 32K Dec 13 01:03:28.151083 kernel: pid_max: default: 32768 minimum: 301 Dec 13 01:03:28.151094 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Dec 13 01:03:28.151102 kernel: landlock: Up and running. Dec 13 01:03:28.151111 kernel: SELinux: Initializing. Dec 13 01:03:28.151122 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:03:28.151130 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Dec 13 01:03:28.151141 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Dec 13 01:03:28.151153 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:03:28.151165 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:03:28.151174 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Dec 13 01:03:28.151182 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Dec 13 01:03:28.151193 kernel: signal: max sigframe size: 3632 Dec 13 01:03:28.151201 kernel: rcu: Hierarchical SRCU implementation. Dec 13 01:03:28.151214 kernel: rcu: Max phase no-delay instances is 400. Dec 13 01:03:28.151222 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Dec 13 01:03:28.151233 kernel: smp: Bringing up secondary CPUs ... Dec 13 01:03:28.151245 kernel: smpboot: x86: Booting SMP configuration: Dec 13 01:03:28.151254 kernel: .... node #0, CPUs: #1 Dec 13 01:03:28.151265 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Dec 13 01:03:28.151275 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Dec 13 01:03:28.151286 kernel: smp: Brought up 1 node, 2 CPUs Dec 13 01:03:28.151295 kernel: smpboot: Max logical packages: 1 Dec 13 01:03:28.151305 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Dec 13 01:03:28.151314 kernel: devtmpfs: initialized Dec 13 01:03:28.151327 kernel: x86/mm: Memory block size: 128MB Dec 13 01:03:28.151338 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Dec 13 01:03:28.151349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Dec 13 01:03:28.151358 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Dec 13 01:03:28.151369 kernel: pinctrl core: initialized pinctrl subsystem Dec 13 01:03:28.151379 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Dec 13 01:03:28.151389 kernel: audit: initializing netlink subsys (disabled) Dec 13 01:03:28.151400 kernel: audit: type=2000 audit(1734051806.028:1): state=initialized audit_enabled=0 res=1 Dec 13 01:03:28.151411 kernel: thermal_sys: Registered thermal governor 'step_wise' Dec 13 01:03:28.151451 kernel: thermal_sys: Registered thermal governor 'user_space' Dec 13 01:03:28.151463 kernel: cpuidle: using governor menu Dec 13 01:03:28.151471 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Dec 13 01:03:28.151482 kernel: dca service started, version 1.12.1 Dec 13 01:03:28.151491 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Dec 13 01:03:28.151500 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Dec 13 01:03:28.151511 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Dec 13 01:03:28.151520 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Dec 13 01:03:28.151531 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Dec 13 01:03:28.151542 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Dec 13 01:03:28.151554 kernel: ACPI: Added _OSI(Module Device) Dec 13 01:03:28.151563 kernel: ACPI: Added _OSI(Processor Device) Dec 13 01:03:28.151571 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Dec 13 01:03:28.151582 kernel: ACPI: Added _OSI(Processor Aggregator Device) Dec 13 01:03:28.151590 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Dec 13 01:03:28.151601 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Dec 13 01:03:28.151610 kernel: ACPI: Interpreter enabled Dec 13 01:03:28.151618 kernel: ACPI: PM: (supports S0 S5) Dec 13 01:03:28.151632 kernel: ACPI: Using IOAPIC for interrupt routing Dec 13 01:03:28.151641 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Dec 13 01:03:28.151652 kernel: PCI: Ignoring E820 reservations for host bridge windows Dec 13 01:03:28.151661 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Dec 13 01:03:28.151670 kernel: iommu: Default domain type: Translated Dec 13 01:03:28.151681 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Dec 13 01:03:28.151689 kernel: efivars: Registered efivars operations Dec 13 01:03:28.151700 kernel: PCI: Using ACPI for IRQ routing Dec 13 01:03:28.151709 kernel: PCI: System does not support PCI Dec 13 01:03:28.151721 kernel: vgaarb: loaded Dec 13 01:03:28.151731 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Dec 13 01:03:28.151739 kernel: VFS: Disk quotas dquot_6.6.0 Dec 13 01:03:28.151751 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Dec 13 01:03:28.151759 kernel: 
pnp: PnP ACPI init Dec 13 01:03:28.151770 kernel: pnp: PnP ACPI: found 3 devices Dec 13 01:03:28.151779 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Dec 13 01:03:28.151791 kernel: NET: Registered PF_INET protocol family Dec 13 01:03:28.151800 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Dec 13 01:03:28.151814 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Dec 13 01:03:28.151826 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Dec 13 01:03:28.151835 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Dec 13 01:03:28.151845 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Dec 13 01:03:28.151856 kernel: TCP: Hash tables configured (established 65536 bind 65536) Dec 13 01:03:28.151865 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:03:28.151876 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Dec 13 01:03:28.151885 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Dec 13 01:03:28.151894 kernel: NET: Registered PF_XDP protocol family Dec 13 01:03:28.151907 kernel: PCI: CLS 0 bytes, default 64 Dec 13 01:03:28.151915 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Dec 13 01:03:28.151927 kernel: software IO TLB: mapped [mem 0x000000003ae75000-0x000000003ee75000] (64MB) Dec 13 01:03:28.151935 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 01:03:28.151946 kernel: Initialise system trusted keyrings Dec 13 01:03:28.151955 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Dec 13 01:03:28.151964 kernel: Key type asymmetric registered Dec 13 01:03:28.151974 kernel: Asymmetric key parser 'x509' registered Dec 13 01:03:28.151982 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 01:03:28.151997 kernel: io scheduler mq-deadline registered Dec 13 01:03:28.152005 kernel: io scheduler kyber registered Dec 13 01:03:28.152015 kernel: io scheduler bfq registered Dec 13 01:03:28.152025 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 01:03:28.152033 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 01:03:28.152044 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 01:03:28.152053 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Dec 13 01:03:28.152064 kernel: i8042: PNP: No PS/2 controller found. 
Dec 13 01:03:28.152279 kernel: rtc_cmos 00:02: registered as rtc0 Dec 13 01:03:28.152437 kernel: rtc_cmos 00:02: setting system clock to 2024-12-13T01:03:27 UTC (1734051807) Dec 13 01:03:28.152566 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Dec 13 01:03:28.152586 kernel: intel_pstate: CPU model not supported Dec 13 01:03:28.152601 kernel: efifb: probing for efifb Dec 13 01:03:28.152618 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Dec 13 01:03:28.152634 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Dec 13 01:03:28.152650 kernel: efifb: scrolling: redraw Dec 13 01:03:28.152670 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Dec 13 01:03:28.152686 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 01:03:28.152702 kernel: fb0: EFI VGA frame buffer device Dec 13 01:03:28.152717 kernel: pstore: Using crash dump compression: deflate Dec 13 01:03:28.152734 kernel: pstore: Registered efi_pstore as persistent store backend Dec 13 01:03:28.152750 kernel: NET: Registered PF_INET6 protocol family Dec 13 01:03:28.152765 kernel: Segment Routing with IPv6 Dec 13 01:03:28.152780 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 01:03:28.152796 kernel: NET: Registered PF_PACKET protocol family Dec 13 01:03:28.152812 kernel: Key type dns_resolver registered Dec 13 01:03:28.152831 kernel: IPI shorthand broadcast: enabled Dec 13 01:03:28.152847 kernel: sched_clock: Marking stable (999003400, 61079000)->(1380437500, -320355100) Dec 13 01:03:28.152862 kernel: registered taskstats version 1 Dec 13 01:03:28.152878 kernel: Loading compiled-in X.509 certificates Dec 13 01:03:28.152894 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: c82d546f528d79a5758dcebbc47fb6daf92836a0' Dec 13 01:03:28.152909 kernel: Key type .fscrypt registered Dec 13 01:03:28.152925 kernel: Key type fscrypt-provisioning registered Dec 13 01:03:28.152941 kernel: ima: No TPM chip found, activating TPM-bypass! Dec 13 01:03:28.152960 kernel: ima: Allocated hash algorithm: sha1 Dec 13 01:03:28.152976 kernel: ima: No architecture policies found Dec 13 01:03:28.152991 kernel: clk: Disabling unused clocks Dec 13 01:03:28.153011 kernel: Freeing unused kernel image (initmem) memory: 42844K Dec 13 01:03:28.153027 kernel: Write protecting the kernel read-only data: 36864k Dec 13 01:03:28.153042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Dec 13 01:03:28.153058 kernel: Run /init as init process Dec 13 01:03:28.153074 kernel: with arguments: Dec 13 01:03:28.153089 kernel: /init Dec 13 01:03:28.153107 kernel: with environment: Dec 13 01:03:28.153123 kernel: HOME=/ Dec 13 01:03:28.153138 kernel: TERM=linux Dec 13 01:03:28.153154 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 01:03:28.153172 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Dec 13 01:03:28.153192 systemd[1]: Detected virtualization microsoft. Dec 13 01:03:28.153208 systemd[1]: Detected architecture x86-64. Dec 13 01:03:28.153222 systemd[1]: Running in initrd. Dec 13 01:03:28.153242 systemd[1]: No hostname configured, using default hostname. Dec 13 01:03:28.153257 systemd[1]: Hostname set to . Dec 13 01:03:28.153274 systemd[1]: Initializing machine ID from random generator. 
Dec 13 01:03:28.153289 systemd[1]: Queued start job for default target initrd.target. Dec 13 01:03:28.153305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Dec 13 01:03:28.153322 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Dec 13 01:03:28.153340 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Dec 13 01:03:28.153356 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Dec 13 01:03:28.153375 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Dec 13 01:03:28.153392 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Dec 13 01:03:28.153411 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Dec 13 01:03:28.153455 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Dec 13 01:03:28.153471 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Dec 13 01:03:28.153488 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 01:03:28.153505 systemd[1]: Reached target paths.target - Path Units. Dec 13 01:03:28.153525 systemd[1]: Reached target slices.target - Slice Units. Dec 13 01:03:28.153541 systemd[1]: Reached target swap.target - Swaps. Dec 13 01:03:28.153557 systemd[1]: Reached target timers.target - Timer Units. Dec 13 01:03:28.153574 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Dec 13 01:03:28.153591 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Dec 13 01:03:28.153607 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Dec 13 01:03:28.153624 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Dec 13 01:03:28.153640 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Dec 13 01:03:28.153656 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Dec 13 01:03:28.153677 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Dec 13 01:03:28.153693 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 01:03:28.153709 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Dec 13 01:03:28.153725 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Dec 13 01:03:28.153742 systemd[1]: Finished network-cleanup.service - Network Cleanup. Dec 13 01:03:28.153758 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 01:03:28.153774 systemd[1]: Starting systemd-journald.service - Journal Service... Dec 13 01:03:28.153791 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Dec 13 01:03:28.153811 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 01:03:28.153864 systemd-journald[176]: Collecting audit messages is disabled. Dec 13 01:03:28.153896 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Dec 13 01:03:28.153909 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 01:03:28.153929 systemd-journald[176]: Journal started Dec 13 01:03:28.153982 systemd-journald[176]: Runtime Journal (/run/log/journal/17930e43c7a845cc9e7a45da3fadd5d0) is 8.0M, max 158.8M, 150.8M free. 
Dec 13 01:03:28.158481 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 01:03:28.159372 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 01:03:28.170798 systemd-modules-load[177]: Inserted module 'overlay' Dec 13 01:03:28.173301 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Dec 13 01:03:28.174634 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 01:03:28.181797 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 01:03:28.206782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Dec 13 01:03:28.210617 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Dec 13 01:03:28.236730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 01:03:28.241492 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 01:03:28.250615 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 01:03:28.257117 kernel: Bridge firewalling registered Dec 13 01:03:28.262522 systemd-modules-load[177]: Inserted module 'br_netfilter' Dec 13 01:03:28.264910 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Dec 13 01:03:28.272236 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 01:03:28.275453 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 01:03:28.290813 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Dec 13 01:03:28.299632 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 01:03:28.307745 dracut-cmdline[207]: dracut-dracut-053 Dec 13 01:03:28.307745 dracut-cmdline[207]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=2fdbba50b59d8c8a9877a81151806ddc16f473fe99b9ba0d8825997d654583ff Dec 13 01:03:28.341936 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 01:03:28.356844 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 01:03:28.400083 systemd-resolved[265]: Positive Trust Anchors: Dec 13 01:03:28.400101 systemd-resolved[265]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 01:03:28.400158 systemd-resolved[265]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 01:03:28.427343 systemd-resolved[265]: Defaulting to hostname 'linux'. 
Dec 13 01:03:28.431437 kernel: SCSI subsystem initialized Dec 13 01:03:28.432660 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 01:03:28.438983 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 01:03:28.450436 kernel: Loading iSCSI transport class v2.0-870. Dec 13 01:03:28.462454 kernel: iscsi: registered transport (tcp) Dec 13 01:03:28.487481 kernel: iscsi: registered transport (qla4xxx) Dec 13 01:03:28.487610 kernel: QLogic iSCSI HBA Driver Dec 13 01:03:28.526134 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Dec 13 01:03:28.535765 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Dec 13 01:03:28.570879 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 01:03:28.571002 kernel: device-mapper: uevent: version 1.0.3 Dec 13 01:03:28.574383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Dec 13 01:03:28.620457 kernel: raid6: avx512x4 gen() 17354 MB/s Dec 13 01:03:28.639442 kernel: raid6: avx512x2 gen() 27559 MB/s Dec 13 01:03:28.658429 kernel: raid6: avx512x1 gen() 27745 MB/s Dec 13 01:03:28.678430 kernel: raid6: avx2x4 gen() 24621 MB/s Dec 13 01:03:28.697426 kernel: raid6: avx2x2 gen() 24635 MB/s Dec 13 01:03:28.718221 kernel: raid6: avx2x1 gen() 21673 MB/s Dec 13 01:03:28.718261 kernel: raid6: using algorithm avx512x1 gen() 27745 MB/s Dec 13 01:03:28.740398 kernel: raid6: .... xor() 25994 MB/s, rmw enabled Dec 13 01:03:28.740448 kernel: raid6: using avx512x2 recovery algorithm Dec 13 01:03:28.763446 kernel: xor: automatically using best checksumming function avx Dec 13 01:03:28.912448 kernel: Btrfs loaded, zoned=no, fsverity=no Dec 13 01:03:28.922776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Dec 13 01:03:28.931772 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 01:03:28.944474 systemd-udevd[397]: Using default interface naming scheme 'v255'. Dec 13 01:03:28.949020 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 01:03:28.963622 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Dec 13 01:03:28.978118 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Dec 13 01:03:29.010922 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Dec 13 01:03:29.018854 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 01:03:29.063085 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 01:03:29.080652 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Dec 13 01:03:29.109978 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Dec 13 01:03:29.119294 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Dec 13 01:03:29.123144 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Dec 13 01:03:29.129762 systemd[1]: Reached target remote-fs.target - Remote File Systems. Dec 13 01:03:29.145706 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Dec 13 01:03:29.178990 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Dec 13 01:03:29.184166 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 01:03:29.196031 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Dec 13 01:03:29.210592 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 01:03:29.210626 kernel: AES CTR mode by8 optimization enabled
Dec 13 01:03:29.196270 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:29.200174 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:03:29.213341 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:29.241152 kernel: hv_vmbus: Vmbus version:5.2
Dec 13 01:03:29.213681 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.213814 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.233239 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.263986 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.270951 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 01:03:29.270988 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 01:03:29.276083 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:29.281306 kernel: hv_vmbus: registering driver hv_netvsc
Dec 13 01:03:29.282653 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.298591 kernel: PTP clock support registered
Dec 13 01:03:29.291298 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.312681 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:29.325844 kernel: hv_vmbus: registering driver hyperv_keyboard
Dec 13 01:03:29.331945 kernel: hv_utils: Registering HyperV Utility Driver
Dec 13 01:03:29.332042 kernel: hv_vmbus: registering driver hv_utils
Dec 13 01:03:29.341445 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Dec 13 01:03:29.341514 kernel: hv_utils: Heartbeat IC version 3.0
Dec 13 01:03:29.345466 kernel: hv_utils: Shutdown IC version 3.2
Dec 13 01:03:29.347179 kernel: hv_utils: TimeSync IC version 4.0
Dec 13 01:03:29.820354 systemd-resolved[265]: Clock change detected. Flushing caches.
Dec 13 01:03:29.838226 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 01:03:29.843022 kernel: hv_vmbus: registering driver hv_storvsc
Dec 13 01:03:29.845970 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:29.856869 kernel: hv_vmbus: registering driver hid_hyperv
Dec 13 01:03:29.856927 kernel: scsi host1: storvsc_host_t
Dec 13 01:03:29.857179 kernel: scsi host0: storvsc_host_t
Dec 13 01:03:29.873699 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Dec 13 01:03:29.873779 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Dec 13 01:03:29.874010 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Dec 13 01:03:29.874298 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 01:03:29.881035 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Dec 13 01:03:29.919298 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Dec 13 01:03:29.928503 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: VF slot 1 added
Dec 13 01:03:29.928721 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 01:03:29.928742 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Dec 13 01:03:29.916829 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:29.938194 kernel: hv_vmbus: registering driver hv_pci
Dec 13 01:03:29.942257 kernel: hv_pci 68d7c4cc-67b9-4bd6-b001-e0585f0ff94a: PCI VMBus probing: Using version 0x10004
Dec 13 01:03:30.001030 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Dec 13 01:03:30.001430 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Dec 13 01:03:30.001609 kernel: hv_pci 68d7c4cc-67b9-4bd6-b001-e0585f0ff94a: PCI host bridge to bus 67b9:00
Dec 13 01:03:30.001766 kernel: pci_bus 67b9:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Dec 13 01:03:30.001926 kernel: sd 0:0:0:0: [sda] Write Protect is off
Dec 13 01:03:30.002077 kernel: pci_bus 67b9:00: No busn resource found for root bus, will use [bus 00-ff]
Dec 13 01:03:30.002230 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Dec 13 01:03:30.002792 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Dec 13 01:03:30.002985 kernel: pci 67b9:00:02.0: [15b3:1016] type 00 class 0x020000
Dec 13 01:03:30.003179 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:03:30.003200 kernel: pci 67b9:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:03:30.003413 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Dec 13 01:03:30.003582 kernel: pci 67b9:00:02.0: enabling Extended Tags
Dec 13 01:03:30.003761 kernel: pci 67b9:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 67b9:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Dec 13 01:03:30.003945 kernel: pci_bus 67b9:00: busn_res: [bus 00-ff] end is updated to 00
Dec 13 01:03:30.004100 kernel: pci 67b9:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Dec 13 01:03:30.176137 kernel: mlx5_core 67b9:00:02.0: enabling device (0000 -> 0002)
Dec 13 01:03:30.408644 kernel: mlx5_core 67b9:00:02.0: firmware version: 14.30.5000
Dec 13 01:03:30.408927 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: VF registering: eth1
Dec 13 01:03:30.409128 kernel: mlx5_core 67b9:00:02.0 eth1: joined to eth0
Dec 13 01:03:30.409386 kernel: mlx5_core 67b9:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Dec 13 01:03:30.417228 kernel: mlx5_core 67b9:00:02.0 enP26553s1: renamed from eth1
Dec 13 01:03:30.483202 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Dec 13 01:03:30.564239 kernel: BTRFS: device fsid c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be devid 1 transid 41 /dev/sda3 scanned by (udev-worker) (453)
Dec 13 01:03:30.568130 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Dec 13 01:03:30.592006 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Dec 13 01:03:30.593367 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Dec 13 01:03:30.609751 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 01:03:30.632241 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (444)
Dec 13 01:03:30.652724 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:03:31.636245 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Dec 13 01:03:31.636344 disk-uuid[601]: The operation has completed successfully.
Dec 13 01:03:31.746612 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 01:03:31.746741 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 01:03:31.780526 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 01:03:31.789122 sh[718]: Success
Dec 13 01:03:31.828419 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Dec 13 01:03:32.053967 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 01:03:32.072374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 01:03:32.075084 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 01:03:32.106324 kernel: BTRFS info (device dm-0): first mount of filesystem c3b72f8a-27ca-4d37-9d0e-1ec3c4bdc3be
Dec 13 01:03:32.106426 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:32.110228 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 01:03:32.113301 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 01:03:32.116038 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 01:03:32.432705 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 01:03:32.439015 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 01:03:32.450459 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 01:03:32.457468 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 01:03:32.470163 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:32.470287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:32.473688 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:32.494954 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:32.510241 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:32.510858 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 01:03:32.521706 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 01:03:32.535593 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 01:03:32.576915 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:03:32.586627 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:03:32.608317 systemd-networkd[902]: lo: Link UP
Dec 13 01:03:32.608328 systemd-networkd[902]: lo: Gained carrier
Dec 13 01:03:32.610585 systemd-networkd[902]: Enumeration completed
Dec 13 01:03:32.610905 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:03:32.611918 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:32.611922 systemd-networkd[902]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:03:32.614001 systemd[1]: Reached target network.target - Network.
Dec 13 01:03:32.684240 kernel: mlx5_core 67b9:00:02.0 enP26553s1: Link up
Dec 13 01:03:32.718235 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: Data path switched to VF: enP26553s1
Dec 13 01:03:32.718338 systemd-networkd[902]: enP26553s1: Link UP
Dec 13 01:03:32.718477 systemd-networkd[902]: eth0: Link UP
Dec 13 01:03:32.718631 systemd-networkd[902]: eth0: Gained carrier
Dec 13 01:03:32.718645 systemd-networkd[902]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:32.725536 systemd-networkd[902]: enP26553s1: Gained carrier
Dec 13 01:03:32.751299 systemd-networkd[902]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:03:33.353203 ignition[840]: Ignition 2.19.0
Dec 13 01:03:33.353239 ignition[840]: Stage: fetch-offline
Dec 13 01:03:33.353294 ignition[840]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.356846 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:03:33.353305 ignition[840]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.353430 ignition[840]: parsed url from cmdline: ""
Dec 13 01:03:33.353434 ignition[840]: no config URL provided
Dec 13 01:03:33.353441 ignition[840]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.353452 ignition[840]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.371467 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 01:03:33.353460 ignition[840]: failed to fetch config: resource requires networking
Dec 13 01:03:33.355587 ignition[840]: Ignition finished successfully
Dec 13 01:03:33.390816 ignition[910]: Ignition 2.19.0
Dec 13 01:03:33.390828 ignition[910]: Stage: fetch
Dec 13 01:03:33.391091 ignition[910]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.391106 ignition[910]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.391231 ignition[910]: parsed url from cmdline: ""
Dec 13 01:03:33.391235 ignition[910]: no config URL provided
Dec 13 01:03:33.391242 ignition[910]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.391251 ignition[910]: no config at "/usr/lib/ignition/user.ign"
Dec 13 01:03:33.391277 ignition[910]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Dec 13 01:03:33.472144 ignition[910]: GET result: OK
Dec 13 01:03:33.472342 ignition[910]: config has been read from IMDS userdata
Dec 13 01:03:33.472385 ignition[910]: parsing config with SHA512: dca9aa0709d2752a8fdb3958d8bc7d06164da6b9fef83b075581d2f00fe5d1e8ddd7d94daaeed50de53162e0e55328c88c7d93fa9c7bb695556b08e55df039a5
Dec 13 01:03:33.480165 unknown[910]: fetched base config from "system"
Dec 13 01:03:33.480179 unknown[910]: fetched base config from "system"
Dec 13 01:03:33.480201 unknown[910]: fetched user config from "azure"
Dec 13 01:03:33.486638 ignition[910]: fetch: fetch complete
Dec 13 01:03:33.486645 ignition[910]: fetch: fetch passed
Dec 13 01:03:33.488640 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 01:03:33.486722 ignition[910]: Ignition finished successfully
Dec 13 01:03:33.506686 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 01:03:33.527654 ignition[916]: Ignition 2.19.0
Dec 13 01:03:33.527667 ignition[916]: Stage: kargs
Dec 13 01:03:33.527928 ignition[916]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.527943 ignition[916]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.534614 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 01:03:33.528990 ignition[916]: kargs: kargs passed
Dec 13 01:03:33.529041 ignition[916]: Ignition finished successfully
Dec 13 01:03:33.552520 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 01:03:33.570077 ignition[922]: Ignition 2.19.0
Dec 13 01:03:33.570090 ignition[922]: Stage: disks
Dec 13 01:03:33.572747 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 01:03:33.570377 ignition[922]: no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:33.570394 ignition[922]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:33.571431 ignition[922]: disks: disks passed
Dec 13 01:03:33.571487 ignition[922]: Ignition finished successfully
Dec 13 01:03:33.588254 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 01:03:33.591350 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:03:33.601258 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:03:33.606726 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:03:33.609490 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:03:33.629514 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 01:03:33.689779 systemd-fsck[930]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Dec 13 01:03:33.696900 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 01:03:33.712425 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 01:03:33.808235 kernel: EXT4-fs (sda9): mounted filesystem 390119fa-ab9c-4f50-b046-3b5c76c46193 r/w with ordered data mode. Quota mode: none.
Dec 13 01:03:33.809505 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 01:03:33.814464 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 01:03:33.860358 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:03:33.867347 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 01:03:33.875411 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Dec 13 01:03:33.879177 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (941)
Dec 13 01:03:33.892907 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:33.893001 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:33.893028 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:33.892798 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 01:03:33.892869 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:03:33.900227 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:33.909416 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:03:33.917081 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 01:03:33.924239 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 01:03:34.384583 systemd-networkd[902]: eth0: Gained IPv6LL
Dec 13 01:03:34.512956 systemd-networkd[902]: enP26553s1: Gained IPv6LL
Dec 13 01:03:34.703287 coreos-metadata[943]: Dec 13 01:03:34.703 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:03:34.711013 coreos-metadata[943]: Dec 13 01:03:34.710 INFO Fetch successful
Dec 13 01:03:34.714312 coreos-metadata[943]: Dec 13 01:03:34.712 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:03:34.733877 coreos-metadata[943]: Dec 13 01:03:34.733 INFO Fetch successful
Dec 13 01:03:34.741373 coreos-metadata[943]: Dec 13 01:03:34.741 INFO wrote hostname ci-4081.2.1-a-672c6884da to /sysroot/etc/hostname
Dec 13 01:03:34.745673 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 01:03:34.750187 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:03:34.777820 initrd-setup-root[978]: cut: /sysroot/etc/group: No such file or directory
Dec 13 01:03:34.786824 initrd-setup-root[985]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 01:03:34.792666 initrd-setup-root[992]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 01:03:35.659470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 01:03:35.672508 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 01:03:35.680480 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 01:03:35.694285 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 01:03:35.702392 kernel: BTRFS info (device sda6): last unmount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:35.715649 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 01:03:35.737308 ignition[1061]: INFO : Ignition 2.19.0
Dec 13 01:03:35.737308 ignition[1061]: INFO : Stage: mount
Dec 13 01:03:35.744983 ignition[1061]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:35.744983 ignition[1061]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:35.744983 ignition[1061]: INFO : mount: mount passed
Dec 13 01:03:35.744983 ignition[1061]: INFO : Ignition finished successfully
Dec 13 01:03:35.740706 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 01:03:35.756394 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 01:03:35.767856 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 01:03:35.786620 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1071)
Dec 13 01:03:35.786694 kernel: BTRFS info (device sda6): first mount of filesystem db063747-cac8-4176-8963-c216c1b11dcb
Dec 13 01:03:35.790225 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 01:03:35.794920 kernel: BTRFS info (device sda6): using free space tree
Dec 13 01:03:35.800231 kernel: BTRFS info (device sda6): auto enabling async discard
Dec 13 01:03:35.802448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 01:03:35.829371 ignition[1088]: INFO : Ignition 2.19.0
Dec 13 01:03:35.829371 ignition[1088]: INFO : Stage: files
Dec 13 01:03:35.834429 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:35.834429 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:35.834429 ignition[1088]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 01:03:35.858910 ignition[1088]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 01:03:35.858910 ignition[1088]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 01:03:35.934881 ignition[1088]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 01:03:35.939475 ignition[1088]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 01:03:35.939475 ignition[1088]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 01:03:35.935551 unknown[1088]: wrote ssh authorized keys file for user: core
Dec 13 01:03:35.965203 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:03:35.972860 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Dec 13 01:03:36.188431 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Dec 13 01:03:36.276771 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:03:36.282606 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 01:03:36.760722 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Dec 13 01:03:37.069097 ignition[1088]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 01:03:37.069097 ignition[1088]: INFO : files: op(c): [started] processing unit "containerd.service"
Dec 13 01:03:37.110115 ignition[1088]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(c): [finished] processing unit "containerd.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 01:03:37.119836 ignition[1088]: INFO : files: files passed
Dec 13 01:03:37.119836 ignition[1088]: INFO : Ignition finished successfully
Dec 13 01:03:37.112968 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 01:03:37.136607 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 01:03:37.181528 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 01:03:37.191287 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 01:03:37.194841 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 01:03:37.215159 initrd-setup-root-after-ignition[1116]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:03:37.215159 initrd-setup-root-after-ignition[1116]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:03:37.220503 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 01:03:37.219361 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:03:37.222721 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 01:03:37.242513 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 01:03:37.285728 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 01:03:37.285871 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 01:03:37.292330 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 01:03:37.300696 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 01:03:37.306446 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 01:03:37.314582 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 01:03:37.330617 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:03:37.339490 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 01:03:37.350662 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:03:37.350985 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:03:37.351455 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 01:03:37.351913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 01:03:37.352076 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 01:03:37.353312 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 01:03:37.353711 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 01:03:37.354187 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 01:03:37.355103 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 01:03:37.355997 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 01:03:37.356473 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 01:03:37.356909 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 01:03:37.357365 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 01:03:37.357812 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 01:03:37.358393 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 01:03:37.358807 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 01:03:37.358972 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 01:03:37.359736 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:03:37.360177 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:03:37.360574 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 01:03:37.481928 ignition[1140]: INFO : Ignition 2.19.0
Dec 13 01:03:37.481928 ignition[1140]: INFO : Stage: umount
Dec 13 01:03:37.481928 ignition[1140]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 01:03:37.481928 ignition[1140]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Dec 13 01:03:37.481928 ignition[1140]: INFO : umount: umount passed
Dec 13 01:03:37.481928 ignition[1140]: INFO : Ignition finished successfully
Dec 13 01:03:37.396571 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:03:37.403127 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 01:03:37.403366 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 01:03:37.409587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 01:03:37.409776 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 01:03:37.414714 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 01:03:37.414910 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 01:03:37.423126 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Dec 13 01:03:37.425960 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Dec 13 01:03:37.453656 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 01:03:37.476745 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 01:03:37.481945 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 01:03:37.482201 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:03:37.533102 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 01:03:37.533342 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 01:03:37.544035 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 01:03:37.544176 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 01:03:37.553647 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 01:03:37.555160 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 01:03:37.555479 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 01:03:37.564577 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 01:03:37.564673 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 01:03:37.572319 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 01:03:37.572416 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 01:03:37.577616 systemd[1]: Stopped target network.target - Network.
Dec 13 01:03:37.584895 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 01:03:37.585010 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 01:03:37.594120 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 01:03:37.596640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 01:03:37.601706 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:03:37.605241 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 01:03:37.607616 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 01:03:37.610147 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 01:03:37.610229 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 01:03:37.617132 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 01:03:37.617200 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 01:03:37.624765 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 01:03:37.627052 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 01:03:37.632166 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 01:03:37.632263 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 01:03:37.640546 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 01:03:37.644081 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 01:03:37.652627 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 01:03:37.652734 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 01:03:37.669765 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 01:03:37.669888 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 01:03:37.672317 systemd-networkd[902]: eth0: DHCPv6 lease lost
Dec 13 01:03:37.677278 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 01:03:37.677407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 01:03:37.681858 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 01:03:37.681940 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:03:37.708490 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 01:03:37.715640 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 01:03:37.715772 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 01:03:37.726704 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 01:03:37.726817 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:03:37.732581 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 01:03:37.733064 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:03:37.746303 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 01:03:37.746432 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:03:37.757804 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:03:37.778034 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 01:03:37.778248 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:03:37.788519 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 01:03:37.788597 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:03:37.797051 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 01:03:37.797122 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:03:37.805461 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 01:03:37.805566 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 01:03:37.813873 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 01:03:37.813981 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 01:03:37.826975 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: Data path switched from VF: enP26553s1
Dec 13 01:03:37.824063 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 01:03:37.824159 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 01:03:37.839516 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 01:03:37.842658 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 01:03:37.842741 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:03:37.849228 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 01:03:37.849297 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:03:37.859141 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 01:03:37.859343 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:03:37.865642 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:37.865708 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:37.866197 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 01:03:37.866833 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 01:03:37.893933 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 01:03:37.894111 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 01:03:38.126500 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 01:03:38.126648 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 01:03:38.134918 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 01:03:38.140963 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 01:03:38.141076 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 01:03:38.157598 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 01:03:38.249657 systemd[1]: Switching root.
Dec 13 01:03:38.287494 systemd-journald[176]: Journal stopped
Dec 13 01:03:43.725548 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Dec 13 01:03:43.725589 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 01:03:43.725604 kernel: SELinux: policy capability open_perms=1
Dec 13 01:03:43.725614 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 01:03:43.725622 kernel: SELinux: policy capability always_check_network=0
Dec 13 01:03:43.725633 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 01:03:43.725642 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 01:03:43.725656 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 01:03:43.725665 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 01:03:43.725674 kernel: audit: type=1403 audit(1734051820.528:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 01:03:43.725686 systemd[1]: Successfully loaded SELinux policy in 150.314ms.
Dec 13 01:03:43.725697 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.779ms.
Dec 13 01:03:43.725710 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 01:03:43.725722 systemd[1]: Detected virtualization microsoft.
Dec 13 01:03:43.725736 systemd[1]: Detected architecture x86-64.
Dec 13 01:03:43.725746 systemd[1]: Detected first boot.
Dec 13 01:03:43.725759 systemd[1]: Hostname set to <ci-4081.2.1-a-672c6884da>.
Dec 13 01:03:43.725769 systemd[1]: Initializing machine ID from random generator.
Dec 13 01:03:43.725781 zram_generator::config[1200]: No configuration found.
Dec 13 01:03:43.725794 systemd[1]: Populated /etc with preset unit settings.
Dec 13 01:03:43.725808 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 01:03:43.725818 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Dec 13 01:03:43.725831 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 01:03:43.725843 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 01:03:43.725856 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 01:03:43.725866 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 01:03:43.725882 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 01:03:43.725892 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 01:03:43.725905 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 01:03:43.725915 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 01:03:43.725928 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 01:03:43.725938 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 01:03:43.725951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 01:03:43.725966 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 01:03:43.725977 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 01:03:43.725989 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 01:03:43.726000 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 01:03:43.726009 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 01:03:43.726019 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 01:03:43.726032 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 01:03:43.726048 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 01:03:43.726058 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 01:03:43.726074 systemd[1]: Reached target swap.target - Swaps.
Dec 13 01:03:43.726084 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 01:03:43.726098 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 01:03:43.726109 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 01:03:43.726119 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 01:03:43.726130 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 01:03:43.726142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 01:03:43.726157 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 01:03:43.726168 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 01:03:43.726181 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 01:03:43.726192 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 01:03:43.726202 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 01:03:43.726226 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:43.726236 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 01:03:43.726251 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 01:03:43.726261 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 01:03:43.726271 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 01:03:43.726285 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:03:43.726295 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 01:03:43.726306 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 01:03:43.726318 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:03:43.726329 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:03:43.726339 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:03:43.726354 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 01:03:43.726364 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:03:43.726375 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 01:03:43.726385 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Dec 13 01:03:43.726398 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Dec 13 01:03:43.726413 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 01:03:43.726426 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 01:03:43.726437 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 01:03:43.726447 kernel: loop: module loaded
Dec 13 01:03:43.726457 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 01:03:43.726467 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 01:03:43.726479 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:43.726491 kernel: fuse: init (API version 7.39)
Dec 13 01:03:43.726500 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 01:03:43.726516 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 01:03:43.726527 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 01:03:43.726560 systemd-journald[1313]: Collecting audit messages is disabled.
Dec 13 01:03:43.726589 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 01:03:43.726606 systemd-journald[1313]: Journal started
Dec 13 01:03:43.726631 systemd-journald[1313]: Runtime Journal (/run/log/journal/175090a90e684207b9be1e775e4f15d0) is 8.0M, max 158.8M, 150.8M free.
Dec 13 01:03:43.736516 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 01:03:43.746520 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 01:03:43.750059 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 01:03:43.753631 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 01:03:43.758825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 01:03:43.765705 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 01:03:43.766964 kernel: ACPI: bus type drm_connector registered
Dec 13 01:03:43.766766 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 01:03:43.772877 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:03:43.773183 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:03:43.777122 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:03:43.777443 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:03:43.780612 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:03:43.780976 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:03:43.785952 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 01:03:43.786265 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 01:03:43.795826 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:03:43.796117 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:03:43.803064 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 01:03:43.806844 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 01:03:43.812442 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 01:03:43.833695 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 01:03:43.847539 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 01:03:43.856654 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 01:03:43.861718 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 01:03:43.902465 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 01:03:43.908039 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 01:03:43.911513 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:03:43.914801 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 01:03:43.918189 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:03:43.920447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 01:03:43.927410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 01:03:43.933676 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 01:03:43.937645 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 01:03:43.945589 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 01:03:43.964592 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 01:03:43.975414 systemd-journald[1313]: Time spent on flushing to /var/log/journal/175090a90e684207b9be1e775e4f15d0 is 21.146ms for 951 entries.
Dec 13 01:03:43.975414 systemd-journald[1313]: System Journal (/var/log/journal/175090a90e684207b9be1e775e4f15d0) is 8.0M, max 2.6G, 2.6G free.
Dec 13 01:03:44.047029 systemd-journald[1313]: Received client request to flush runtime journal.
Dec 13 01:03:43.977688 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 01:03:43.984419 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 01:03:44.003373 udevadm[1366]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 01:03:44.039821 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Dec 13 01:03:44.039854 systemd-tmpfiles[1360]: ACLs are not supported, ignoring.
Dec 13 01:03:44.050184 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 01:03:44.059882 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 01:03:44.077506 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 01:03:44.086993 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 01:03:44.258633 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 01:03:44.268551 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 01:03:44.293765 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Dec 13 01:03:44.293793 systemd-tmpfiles[1381]: ACLs are not supported, ignoring.
Dec 13 01:03:44.301173 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 01:03:45.348950 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 01:03:45.358502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 01:03:45.387025 systemd-udevd[1387]: Using default interface naming scheme 'v255'.
Dec 13 01:03:45.630595 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 01:03:45.643162 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 01:03:45.724511 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 01:03:45.735356 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1402)
Dec 13 01:03:45.786240 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1402)
Dec 13 01:03:45.789200 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Dec 13 01:03:45.821234 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 01:03:45.847992 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 01:03:45.922233 kernel: hv_vmbus: registering driver hv_balloon
Dec 13 01:03:45.922331 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Dec 13 01:03:45.951235 kernel: hv_vmbus: registering driver hyperv_fb
Dec 13 01:03:45.975232 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Dec 13 01:03:45.981727 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Dec 13 01:03:45.990485 kernel: Console: switching to colour dummy device 80x25
Dec 13 01:03:45.996231 kernel: Console: switching to colour frame buffer device 128x48
Dec 13 01:03:46.137582 systemd-networkd[1391]: lo: Link UP
Dec 13 01:03:46.141250 systemd-networkd[1391]: lo: Gained carrier
Dec 13 01:03:46.150821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:46.155540 systemd-networkd[1391]: Enumeration completed
Dec 13 01:03:46.156034 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:46.156039 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:03:46.157011 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 01:03:46.179503 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 01:03:46.233290 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Dec 13 01:03:46.236232 kernel: mlx5_core 67b9:00:02.0 enP26553s1: Link up
Dec 13 01:03:46.261247 kernel: hv_netvsc 7c1e5220-fcd4-7c1e-5220-fcd47c1e5220 eth0: Data path switched to VF: enP26553s1
Dec 13 01:03:46.265961 systemd-networkd[1391]: enP26553s1: Link UP
Dec 13 01:03:46.267593 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 01:03:46.267862 systemd-networkd[1391]: eth0: Link UP
Dec 13 01:03:46.268674 systemd-networkd[1391]: eth0: Gained carrier
Dec 13 01:03:46.268911 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:46.271922 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:46.281801 systemd-networkd[1391]: enP26553s1: Gained carrier
Dec 13 01:03:46.289821 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 01:03:46.315377 systemd-networkd[1391]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:03:46.336243 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1394)
Dec 13 01:03:46.402012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Dec 13 01:03:46.402739 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 01:03:46.426452 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 01:03:46.487571 lvm[1478]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:03:46.520761 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 01:03:46.524952 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 01:03:46.535441 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 01:03:46.545723 lvm[1481]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 01:03:46.571185 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 01:03:46.579422 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 01:03:46.579509 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 01:03:46.579531 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 01:03:46.579570 systemd[1]: Reached target machines.target - Containers.
Dec 13 01:03:46.580910 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 01:03:46.594751 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 01:03:46.599975 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 01:03:46.603077 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:03:46.608518 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 01:03:46.613518 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 01:03:46.617529 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 01:03:46.627343 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 01:03:46.661981 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 01:03:46.662324 kernel: loop0: detected capacity change from 0 to 31056
Dec 13 01:03:46.691432 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 01:03:46.692622 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 01:03:46.780133 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 01:03:47.029687 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 01:03:47.090252 kernel: loop1: detected capacity change from 0 to 142488
Dec 13 01:03:47.557734 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 01:03:47.629274 kernel: loop3: detected capacity change from 0 to 140768
Dec 13 01:03:47.999238 kernel: loop4: detected capacity change from 0 to 31056
Dec 13 01:03:48.006242 kernel: loop5: detected capacity change from 0 to 142488
Dec 13 01:03:48.016370 systemd-networkd[1391]: enP26553s1: Gained IPv6LL
Dec 13 01:03:48.021240 kernel: loop6: detected capacity change from 0 to 211296
Dec 13 01:03:48.030237 kernel: loop7: detected capacity change from 0 to 140768
Dec 13 01:03:48.038287 (sd-merge)[1506]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Dec 13 01:03:48.038965 (sd-merge)[1506]: Merged extensions into '/usr'.
Dec 13 01:03:48.042681 systemd[1]: Reloading requested from client PID 1488 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 01:03:48.042699 systemd[1]: Reloading...
Dec 13 01:03:48.130326 zram_generator::config[1537]: No configuration found.
Dec 13 01:03:48.284948 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:03:48.336385 systemd-networkd[1391]: eth0: Gained IPv6LL
Dec 13 01:03:48.378485 systemd[1]: Reloading finished in 335 ms.
Dec 13 01:03:48.397517 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Dec 13 01:03:48.401952 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 01:03:48.413455 systemd[1]: Starting ensure-sysext.service...
Dec 13 01:03:48.418853 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 01:03:48.427231 systemd[1]: Reloading requested from client PID 1600 ('systemctl') (unit ensure-sysext.service)...
Dec 13 01:03:48.427254 systemd[1]: Reloading...
Dec 13 01:03:48.458409 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 01:03:48.458906 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 01:03:48.459760 systemd-tmpfiles[1601]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 01:03:48.460073 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Dec 13 01:03:48.460159 systemd-tmpfiles[1601]: ACLs are not supported, ignoring.
Dec 13 01:03:48.490357 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:03:48.490378 systemd-tmpfiles[1601]: Skipping /boot
Dec 13 01:03:48.506159 systemd-tmpfiles[1601]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 01:03:48.506180 systemd-tmpfiles[1601]: Skipping /boot
Dec 13 01:03:48.528243 zram_generator::config[1629]: No configuration found.
Dec 13 01:03:48.693665 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:03:48.769488 systemd[1]: Reloading finished in 341 ms.
Dec 13 01:03:48.783108 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 01:03:48.800408 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Dec 13 01:03:48.822457 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 01:03:48.836459 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 01:03:48.843506 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 01:03:48.849425 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 01:03:48.870450 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:48.870751 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:03:48.874649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:03:48.886814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:03:48.901746 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:03:48.905826 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:03:48.909369 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:48.910838 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:03:48.911121 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:03:48.924361 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:03:48.924621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:03:48.931428 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:03:48.933533 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:03:48.961493 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 01:03:48.983583 systemd[1]: Finished ensure-sysext.service.
Dec 13 01:03:48.987811 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:48.988142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 01:03:48.993435 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 01:03:48.999554 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 01:03:49.013489 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 01:03:49.020362 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 01:03:49.023732 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 01:03:49.023835 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 01:03:49.040512 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 01:03:49.043541 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 01:03:49.047731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 01:03:49.048175 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 01:03:49.050888 systemd-resolved[1703]: Positive Trust Anchors:
Dec 13 01:03:49.051395 systemd-resolved[1703]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 01:03:49.051453 systemd-resolved[1703]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 01:03:49.054596 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 01:03:49.054921 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 01:03:49.058297 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 01:03:49.058584 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 01:03:49.062162 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 01:03:49.062499 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 01:03:49.071770 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 01:03:49.071940 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 01:03:49.081732 systemd-resolved[1703]: Using system hostname 'ci-4081.2.1-a-672c6884da'.
Dec 13 01:03:49.084825 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 01:03:49.088315 systemd[1]: Reached target network.target - Network.
Dec 13 01:03:49.090907 systemd[1]: Reached target network-online.target - Network is Online.
Dec 13 01:03:49.093724 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 01:03:49.107606 augenrules[1746]: No rules
Dec 13 01:03:49.109887 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Dec 13 01:03:49.871317 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 01:03:49.876265 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 01:03:51.950277 ldconfig[1485]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 01:03:51.969942 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 01:03:51.988476 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 01:03:52.003764 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 01:03:52.008526 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 01:03:52.012844 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 01:03:52.016662 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 01:03:52.020350 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 01:03:52.023464 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 01:03:52.026848 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 01:03:52.030392 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 01:03:52.030461 systemd[1]: Reached target paths.target - Path Units.
Dec 13 01:03:52.033187 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 01:03:52.036944 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 01:03:52.041943 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 01:03:52.046727 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 01:03:52.052018 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 01:03:52.055404 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 01:03:52.058250 systemd[1]: Reached target basic.target - Basic System.
Dec 13 01:03:52.061162 systemd[1]: System is tainted: cgroupsv1
Dec 13 01:03:52.061271 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:03:52.061313 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 01:03:52.064170 systemd[1]: Starting chronyd.service - NTP client/server...
Dec 13 01:03:52.070379 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 01:03:52.086441 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Dec 13 01:03:52.105507 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 01:03:52.107395 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 01:03:52.113822 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 01:03:52.117366 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 01:03:52.117442 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Dec 13 01:03:52.120669 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Dec 13 01:03:52.126523 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Dec 13 01:03:52.135377 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Dec 13 01:03:52.144198 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 01:03:52.154435 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Dec 13 01:03:52.168248 jq[1769]: false
Dec 13 01:03:52.171988 KVP[1771]: KVP starting; pid is:1771
Dec 13 01:03:52.179350 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Dec 13 01:03:52.194477 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 01:03:52.198993 kernel: hv_utils: KVP IC version 4.0
Dec 13 01:03:52.198257 KVP[1771]: KVP LIC Version: 3.1
Dec 13 01:03:52.205550 (chronyd)[1762]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Dec 13 01:03:52.216463 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 01:03:52.246487 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 01:03:52.250442 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 01:03:52.254966 chronyd[1794]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Dec 13 01:03:52.258432 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 01:03:52.265870 extend-filesystems[1770]: Found loop4
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found loop5
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found loop6
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found loop7
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda1
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda2
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda3
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found usr
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda4
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda6
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda7
Dec 13 01:03:52.270143 extend-filesystems[1770]: Found sda9
Dec 13 01:03:52.270143 extend-filesystems[1770]: Checking size of /dev/sda9
Dec 13 01:03:52.364980 extend-filesystems[1770]: Old size kept for /dev/sda9
Dec 13 01:03:52.364980 extend-filesystems[1770]: Found sr0
Dec 13 01:03:52.351477 chronyd[1794]: Timezone right/UTC failed leap second check, ignoring
Dec 13 01:03:52.280459 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 01:03:52.351756 chronyd[1794]: Loaded seccomp filter (level 2)
Dec 13 01:03:52.292154 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 01:03:52.356578 dbus-daemon[1767]: [system] SELinux support is enabled
Dec 13 01:03:52.292581 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 01:03:52.393089 jq[1798]: true
Dec 13 01:03:52.308809 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 01:03:52.309165 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 01:03:52.350895 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 01:03:52.351259 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 01:03:52.392165 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 01:03:52.403498 systemd[1]: Started chronyd.service - NTP client/server.
Dec 13 01:03:52.409105 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 01:03:52.409526 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 01:03:52.462020 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Dec 13 01:03:52.469004 (ntainerd)[1818]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 01:03:52.494554 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 01:03:52.494634 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 01:03:52.499530 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 01:03:52.499569 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 01:03:52.512147 jq[1816]: true
Dec 13 01:03:52.515679 tar[1807]: linux-amd64/helm
Dec 13 01:03:52.521241 update_engine[1796]: I20241213 01:03:52.519666 1796 main.cc:92] Flatcar Update Engine starting
Dec 13 01:03:52.542847 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 01:03:52.548764 update_engine[1796]: I20241213 01:03:52.548681 1796 update_check_scheduler.cc:74] Next update check in 6m34s
Dec 13 01:03:52.552676 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 01:03:52.555443 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 01:03:52.576332 systemd-logind[1788]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 01:03:52.579936 systemd-logind[1788]: New seat seat0.
Dec 13 01:03:52.583471 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 01:03:52.637974 coreos-metadata[1764]: Dec 13 01:03:52.634 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Dec 13 01:03:52.648945 coreos-metadata[1764]: Dec 13 01:03:52.643 INFO Fetch successful
Dec 13 01:03:52.648945 coreos-metadata[1764]: Dec 13 01:03:52.643 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Dec 13 01:03:52.653820 coreos-metadata[1764]: Dec 13 01:03:52.653 INFO Fetch successful
Dec 13 01:03:52.657354 coreos-metadata[1764]: Dec 13 01:03:52.657 INFO Fetching http://168.63.129.16/machine/8a8dc7a5-df06-43bc-8736-057ba131f2b3/02011072%2D3cd7%2D41fb%2Daa75%2D05b540c8344a.%5Fci%2D4081.2.1%2Da%2D672c6884da?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Dec 13 01:03:52.663142 coreos-metadata[1764]: Dec 13 01:03:52.662 INFO Fetch successful
Dec 13 01:03:52.663142 coreos-metadata[1764]: Dec 13 01:03:52.662 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Dec 13 01:03:52.684266 coreos-metadata[1764]: Dec 13 01:03:52.683 INFO Fetch successful
Dec 13 01:03:52.686592 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (1848)
Dec 13 01:03:52.694757 bash[1857]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 01:03:52.716445 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 01:03:52.733161 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 01:03:52.817556 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Dec 13 01:03:52.826793 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Dec 13 01:03:53.086074 locksmithd[1839]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 01:03:53.130483 sshd_keygen[1799]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 01:03:53.215244 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 01:03:53.232904 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 01:03:53.250090 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Dec 13 01:03:53.296863 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 01:03:53.304399 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 01:03:53.322660 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 01:03:53.357545 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Dec 13 01:03:53.384152 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 01:03:53.409580 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 01:03:53.430134 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 01:03:53.436562 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 01:03:53.637866 tar[1807]: linux-amd64/LICENSE
Dec 13 01:03:53.640532 tar[1807]: linux-amd64/README.md
Dec 13 01:03:53.666674 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Dec 13 01:03:53.699166 containerd[1818]: time="2024-12-13T01:03:53.699037000Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Dec 13 01:03:53.741483 containerd[1818]: time="2024-12-13T01:03:53.740475900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.744826 containerd[1818]: time="2024-12-13T01:03:53.744477000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:03:53.745037 containerd[1818]: time="2024-12-13T01:03:53.745011800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 01:03:53.745189 containerd[1818]: time="2024-12-13T01:03:53.745169700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 01:03:53.745523 containerd[1818]: time="2024-12-13T01:03:53.745500300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 01:03:53.745617 containerd[1818]: time="2024-12-13T01:03:53.745602600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.745776 containerd[1818]: time="2024-12-13T01:03:53.745757200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:03:53.745841 containerd[1818]: time="2024-12-13T01:03:53.745825800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.746289 containerd[1818]: time="2024-12-13T01:03:53.746255600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.746387500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.746423800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.746449600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.746568800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.746842800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.747083400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.747111800Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.747226300Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 01:03:53.747463 containerd[1818]: time="2024-12-13T01:03:53.747299700Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759169400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759301800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759330800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759407000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759441800Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 01:03:53.759943 containerd[1818]: time="2024-12-13T01:03:53.759690800Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 01:03:53.760564 containerd[1818]: time="2024-12-13T01:03:53.760312000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 01:03:53.760564 containerd[1818]: time="2024-12-13T01:03:53.760497000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 01:03:53.760564 containerd[1818]: time="2024-12-13T01:03:53.760534900Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 01:03:53.760564 containerd[1818]: time="2024-12-13T01:03:53.760558600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760579000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760597400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760617100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760637800Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760666800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760688800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760709600Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.760746 containerd[1818]: time="2024-12-13T01:03:53.760730800Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760773600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760797300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760817400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760838900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760857700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760879900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760899000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760919900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760939600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760963000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760980200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761004 containerd[1818]: time="2024-12-13T01:03:53.760998300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761017400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761053700Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761091400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761110700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761128200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761188200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761239000Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761257400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761275900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761290600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761311300Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761328800Z" level=info msg="NRI interface is disabled by configuration."
Dec 13 01:03:53.761418 containerd[1818]: time="2024-12-13T01:03:53.761351600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Dec 13 01:03:53.761856 containerd[1818]: time="2024-12-13T01:03:53.761774900Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Dec 13 01:03:53.762105 containerd[1818]: time="2024-12-13T01:03:53.761875200Z" level=info msg="Connect containerd service"
Dec 13 01:03:53.762105 containerd[1818]: time="2024-12-13T01:03:53.761973500Z" level=info msg="using legacy CRI server"
Dec 13 01:03:53.762105 containerd[1818]: time="2024-12-13T01:03:53.761986100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Dec 13 01:03:53.762245 containerd[1818]: time="2024-12-13T01:03:53.762146500Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763169300Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763646800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763707300Z" level=info msg=serving... address=/run/containerd/containerd.sock
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763843000Z" level=info msg="Start subscribing containerd event"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763891600Z" level=info msg="Start recovering state"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763979600Z" level=info msg="Start event monitor"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.763999400Z" level=info msg="Start snapshots syncer"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.764014400Z" level=info msg="Start cni network conf syncer for default"
Dec 13 01:03:53.765628 containerd[1818]: time="2024-12-13T01:03:53.764024500Z" level=info msg="Start streaming server"
Dec 13 01:03:53.764319 systemd[1]: Started containerd.service - containerd container runtime.
Dec 13 01:03:53.767341 containerd[1818]: time="2024-12-13T01:03:53.767311200Z" level=info msg="containerd successfully booted in 0.069529s"
Dec 13 01:03:53.953880 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Dec 13 01:03:53.958119 systemd[1]: Reached target multi-user.target - Multi-User System.
Dec 13 01:03:53.961878 systemd[1]: Startup finished in 680ms (firmware) + 28.839s (loader) + 13.343s (kernel) + 13.579s (userspace) = 56.443s.
Dec 13 01:03:53.966942 (kubelet)[1957]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Dec 13 01:03:54.344861 login[1938]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying
Dec 13 01:03:54.347931 login[1939]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:03:54.363172 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Dec 13 01:03:54.371570 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Dec 13 01:03:54.378171 systemd-logind[1788]: New session 1 of user core.
Dec 13 01:03:54.398822 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Dec 13 01:03:54.415078 systemd[1]: Starting user@500.service - User Manager for UID 500...
Dec 13 01:03:54.438001 (systemd)[1970]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Dec 13 01:03:54.680203 kubelet[1957]: E1213 01:03:54.680005 1957 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Dec 13 01:03:54.687800 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Dec 13 01:03:54.688051 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Dec 13 01:03:54.755964 systemd[1970]: Queued start job for default target default.target.
Dec 13 01:03:54.756556 systemd[1970]: Created slice app.slice - User Application Slice.
Dec 13 01:03:54.756587 systemd[1970]: Reached target paths.target - Paths.
Dec 13 01:03:54.756606 systemd[1970]: Reached target timers.target - Timers.
Dec 13 01:03:54.761351 systemd[1970]: Starting dbus.socket - D-Bus User Message Bus Socket...
Dec 13 01:03:54.771071 systemd[1970]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Dec 13 01:03:54.771165 systemd[1970]: Reached target sockets.target - Sockets.
Dec 13 01:03:54.771184 systemd[1970]: Reached target basic.target - Basic System.
Dec 13 01:03:54.771254 systemd[1970]: Reached target default.target - Main User Target.
Dec 13 01:03:54.771296 systemd[1970]: Startup finished in 322ms.
Dec 13 01:03:54.772535 systemd[1]: Started user@500.service - User Manager for UID 500.
Dec 13 01:03:54.784614 systemd[1]: Started session-1.scope - Session 1 of User core.
Dec 13 01:03:55.167507 waagent[1932]: 2024-12-13T01:03:55.167367Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.168043Z INFO Daemon Daemon OS: flatcar 4081.2.1
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.169325Z INFO Daemon Daemon Python: 3.11.9
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.170026Z INFO Daemon Daemon Run daemon
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.170884Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.1'
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.171708Z INFO Daemon Daemon Using waagent for provisioning
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.172792Z INFO Daemon Daemon Activate resource disk
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.173149Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.177323Z INFO Daemon Daemon Found device: None
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.178085Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.179026Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.181974Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:03:55.205200 waagent[1932]: 2024-12-13T01:03:55.183055Z INFO Daemon Daemon Running default provisioning handler
Dec 13 01:03:55.209573 waagent[1932]: 2024-12-13T01:03:55.209458Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Dec 13 01:03:55.217830 waagent[1932]: 2024-12-13T01:03:55.217730Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Dec 13 01:03:55.231582 waagent[1932]: 2024-12-13T01:03:55.223641Z INFO Daemon Daemon cloud-init is enabled: False
Dec 13 01:03:55.231582 waagent[1932]: 2024-12-13T01:03:55.223871Z INFO Daemon Daemon Copying ovf-env.xml
Dec 13 01:03:55.310241 waagent[1932]: 2024-12-13T01:03:55.307186Z INFO Daemon Daemon Successfully mounted dvd
Dec 13 01:03:55.324355 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.326270Z INFO Daemon Daemon Detect protocol endpoint
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.326687Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.327710Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.328165Z INFO Daemon Daemon Test for route to 168.63.129.16
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.328792Z INFO Daemon Daemon Route to 168.63.129.16 exists
Dec 13 01:03:55.330595 waagent[1932]: 2024-12-13T01:03:55.329079Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Dec 13 01:03:55.345309 login[1938]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Dec 13 01:03:55.350196 systemd-logind[1788]: New session 2 of user core.
Dec 13 01:03:55.355607 systemd[1]: Started session-2.scope - Session 2 of User core.
Dec 13 01:03:55.359829 waagent[1932]: 2024-12-13T01:03:55.359730Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Dec 13 01:03:55.363301 waagent[1932]: 2024-12-13T01:03:55.361004Z INFO Daemon Daemon Wire protocol version:2012-11-30
Dec 13 01:03:55.368762 waagent[1932]: 2024-12-13T01:03:55.363688Z INFO Daemon Daemon Server preferred version:2015-04-05
Dec 13 01:03:55.442355 waagent[1932]: 2024-12-13T01:03:55.442117Z INFO Daemon Daemon Initializing goal state during protocol detection
Dec 13 01:03:55.446681 waagent[1932]: 2024-12-13T01:03:55.446550Z INFO Daemon Daemon Forcing an update of the goal state.
Dec 13 01:03:55.455092 waagent[1932]: 2024-12-13T01:03:55.455002Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 01:03:55.474156 waagent[1932]: 2024-12-13T01:03:55.474067Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.475072Z INFO Daemon
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.476131Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 41d20f42-0f0b-4e3f-8a4e-501739c444bc eTag: 17644256499117410756 source: Fabric]
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.477438Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.478501Z INFO Daemon
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.478879Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Dec 13 01:03:55.492855 waagent[1932]: 2024-12-13T01:03:55.484366Z INFO Daemon Daemon Downloading artifacts profile blob
Dec 13 01:03:55.581948 waagent[1932]: 2024-12-13T01:03:55.581830Z INFO Daemon Downloaded certificate {'thumbprint': 'CFF1821AD9DF8D9EC477AD7E6A581D42321FA8DE', 'hasPrivateKey': False}
Dec 13 01:03:55.587331 waagent[1932]: 2024-12-13T01:03:55.587246Z INFO Daemon Downloaded certificate {'thumbprint': '7A4CCB54959A96CE8C63FE72E9F3C212ADC002AD', 'hasPrivateKey': True}
Dec 13 01:03:55.592781 waagent[1932]: 2024-12-13T01:03:55.592700Z INFO Daemon Fetch goal state completed
Dec 13 01:03:55.603739 waagent[1932]: 2024-12-13T01:03:55.603625Z INFO Daemon Daemon Starting provisioning
Dec 13 01:03:55.611293 waagent[1932]: 2024-12-13T01:03:55.604519Z INFO Daemon Daemon Handle ovf-env.xml.
Dec 13 01:03:55.611293 waagent[1932]: 2024-12-13T01:03:55.606373Z INFO Daemon Daemon Set hostname [ci-4081.2.1-a-672c6884da]
Dec 13 01:03:55.625307 waagent[1932]: 2024-12-13T01:03:55.625180Z INFO Daemon Daemon Publish hostname [ci-4081.2.1-a-672c6884da]
Dec 13 01:03:55.633870 waagent[1932]: 2024-12-13T01:03:55.625852Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Dec 13 01:03:55.633870 waagent[1932]: 2024-12-13T01:03:55.626370Z INFO Daemon Daemon Primary interface is [eth0]
Dec 13 01:03:55.670121 systemd-networkd[1391]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 01:03:55.670135 systemd-networkd[1391]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 01:03:55.670203 systemd-networkd[1391]: eth0: DHCP lease lost
Dec 13 01:03:55.672117 waagent[1932]: 2024-12-13T01:03:55.672000Z INFO Daemon Daemon Create user account if not exists
Dec 13 01:03:55.689366 waagent[1932]: 2024-12-13T01:03:55.672591Z INFO Daemon Daemon User core already exists, skip useradd
Dec 13 01:03:55.689366 waagent[1932]: 2024-12-13T01:03:55.674112Z INFO Daemon Daemon Configure sudoer
Dec 13 01:03:55.689366 waagent[1932]: 2024-12-13T01:03:55.675361Z INFO Daemon Daemon Configure sshd
Dec 13 01:03:55.689366 waagent[1932]: 2024-12-13T01:03:55.676558Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Dec 13 01:03:55.689366 waagent[1932]: 2024-12-13T01:03:55.677227Z INFO Daemon Daemon Deploy ssh public key.
Dec 13 01:03:55.689482 systemd-networkd[1391]: eth0: DHCPv6 lease lost
Dec 13 01:03:55.720335 systemd-networkd[1391]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Dec 13 01:03:56.784045 waagent[1932]: 2024-12-13T01:03:56.783942Z INFO Daemon Daemon Provisioning complete
Dec 13 01:03:56.798244 waagent[1932]: 2024-12-13T01:03:56.798060Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Dec 13 01:03:56.808521 waagent[1932]: 2024-12-13T01:03:56.798765Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Dec 13 01:03:56.808521 waagent[1932]: 2024-12-13T01:03:56.800327Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Dec 13 01:03:56.945522 waagent[2031]: 2024-12-13T01:03:56.945386Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Dec 13 01:03:56.946091 waagent[2031]: 2024-12-13T01:03:56.945609Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.1
Dec 13 01:03:56.946091 waagent[2031]: 2024-12-13T01:03:56.945696Z INFO ExtHandler ExtHandler Python: 3.11.9
Dec 13 01:03:56.983425 waagent[2031]: 2024-12-13T01:03:56.983295Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.1; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Dec 13 01:03:56.983715 waagent[2031]: 2024-12-13T01:03:56.983656Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:03:56.983821 waagent[2031]: 2024-12-13T01:03:56.983773Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:03:56.993560 waagent[2031]: 2024-12-13T01:03:56.993435Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Dec 13 01:03:56.999760 waagent[2031]: 2024-12-13T01:03:56.999688Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Dec 13 01:03:57.002627 waagent[2031]: 2024-12-13T01:03:57.002550Z INFO ExtHandler
Dec 13 01:03:57.002772 waagent[2031]: 2024-12-13T01:03:57.002697Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: c8071800-1afa-48de-b771-07a0fe0a7757 eTag: 17644256499117410756 source: Fabric]
Dec 13 01:03:57.003177 waagent[2031]: 2024-12-13T01:03:57.003118Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Dec 13 01:03:57.003834 waagent[2031]: 2024-12-13T01:03:57.003776Z INFO ExtHandler
Dec 13 01:03:57.003917 waagent[2031]: 2024-12-13T01:03:57.003864Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Dec 13 01:03:57.007884 waagent[2031]: 2024-12-13T01:03:57.007836Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Dec 13 01:03:57.100637 waagent[2031]: 2024-12-13T01:03:57.100440Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFF1821AD9DF8D9EC477AD7E6A581D42321FA8DE', 'hasPrivateKey': False}
Dec 13 01:03:57.101123 waagent[2031]: 2024-12-13T01:03:57.101063Z INFO ExtHandler Downloaded certificate {'thumbprint': '7A4CCB54959A96CE8C63FE72E9F3C212ADC002AD', 'hasPrivateKey': True}
Dec 13 01:03:57.101656 waagent[2031]: 2024-12-13T01:03:57.101603Z INFO ExtHandler Fetch goal state completed
Dec 13 01:03:57.119662 waagent[2031]: 2024-12-13T01:03:57.119564Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 2031
Dec 13 01:03:57.119880 waagent[2031]: 2024-12-13T01:03:57.119826Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Dec 13 01:03:57.121698 waagent[2031]: 2024-12-13T01:03:57.121631Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.1', '', 'Flatcar Container Linux by Kinvolk']
Dec 13 01:03:57.122104 waagent[2031]: 2024-12-13T01:03:57.122052Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Dec 13 01:03:57.167395 waagent[2031]: 2024-12-13T01:03:57.167330Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Dec 13 01:03:57.167731 waagent[2031]: 2024-12-13T01:03:57.167668Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Dec 13 01:03:57.176368 waagent[2031]: 2024-12-13T01:03:57.176320Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Dec 13 01:03:57.185101 systemd[1]: Reloading requested from client PID 2046 ('systemctl') (unit waagent.service)...
Dec 13 01:03:57.185121 systemd[1]: Reloading...
Dec 13 01:03:57.283336 zram_generator::config[2083]: No configuration found.
Dec 13 01:03:57.423362 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:03:57.501285 systemd[1]: Reloading finished in 315 ms.
Dec 13 01:03:57.530116 waagent[2031]: 2024-12-13T01:03:57.528553Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service
Dec 13 01:03:57.537635 systemd[1]: Reloading requested from client PID 2142 ('systemctl') (unit waagent.service)...
Dec 13 01:03:57.537655 systemd[1]: Reloading...
Dec 13 01:03:57.630237 zram_generator::config[2172]: No configuration found.
Dec 13 01:03:57.768487 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 01:03:57.846431 systemd[1]: Reloading finished in 308 ms.
Dec 13 01:03:57.873293 waagent[2031]: 2024-12-13T01:03:57.871126Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service
Dec 13 01:03:57.873293 waagent[2031]: 2024-12-13T01:03:57.872488Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully
Dec 13 01:03:58.262753 waagent[2031]: 2024-12-13T01:03:58.262624Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Dec 13 01:03:58.263748 waagent[2031]: 2024-12-13T01:03:58.263662Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Dec 13 01:03:58.264822 waagent[2031]: 2024-12-13T01:03:58.264756Z INFO ExtHandler ExtHandler Starting env monitor service.
Dec 13 01:03:58.265046 waagent[2031]: 2024-12-13T01:03:58.264986Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:03:58.265623 waagent[2031]: 2024-12-13T01:03:58.265556Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Dec 13 01:03:58.265797 waagent[2031]: 2024-12-13T01:03:58.265729Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Dec 13 01:03:58.265908 waagent[2031]: 2024-12-13T01:03:58.265850Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Dec 13 01:03:58.266428 waagent[2031]: 2024-12-13T01:03:58.266355Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Dec 13 01:03:58.266775 waagent[2031]: 2024-12-13T01:03:58.266703Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Dec 13 01:03:58.266982 waagent[2031]: 2024-12-13T01:03:58.266843Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Dec 13 01:03:58.266982 waagent[2031]: 2024-12-13T01:03:58.266917Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Dec 13 01:03:58.267451 waagent[2031]: 2024-12-13T01:03:58.267391Z INFO EnvHandler ExtHandler Configure routes Dec 13 01:03:58.267941 waagent[2031]: 2024-12-13T01:03:58.267874Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Dec 13 01:03:58.268005 waagent[2031]: 2024-12-13T01:03:58.267963Z INFO EnvHandler ExtHandler Gateway:None Dec 13 01:03:58.268233 waagent[2031]: 2024-12-13T01:03:58.268174Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Dec 13 01:03:58.268233 waagent[2031]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Dec 13 01:03:58.268233 waagent[2031]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Dec 13 01:03:58.268233 waagent[2031]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Dec 13 01:03:58.268233 waagent[2031]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:03:58.268233 waagent[2031]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:03:58.268233 waagent[2031]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Dec 13 01:03:58.268664 waagent[2031]: 2024-12-13T01:03:58.268588Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Dec 13 01:03:58.268905 waagent[2031]: 2024-12-13T01:03:58.268843Z INFO EnvHandler ExtHandler Routes:None Dec 13 01:03:58.268986 waagent[2031]: 2024-12-13T01:03:58.268935Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Dec 13 01:03:58.277195 waagent[2031]: 2024-12-13T01:03:58.277103Z INFO ExtHandler ExtHandler Dec 13 01:03:58.277387 waagent[2031]: 2024-12-13T01:03:58.277334Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 26f29a31-20be-429f-8b6f-35945658c0d4 correlation 19a3f8a1-95be-4e76-8e68-86b7a0d07e9f created: 2024-12-13T01:02:47.588966Z] Dec 13 01:03:58.277807 waagent[2031]: 2024-12-13T01:03:58.277756Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
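The routing table the monitor thread dumps above comes straight from /proc/net/route, where destination and gateway addresses are little-endian hexadecimal. A small Python sketch decodes the logged values; note the static routes to 168.63.129.16 (the Azure wireserver) and 169.254.169.254 (the metadata endpoint):

    import socket
    import struct

    def decode(hexaddr: str) -> str:
        """Little-endian hex from /proc/net/route -> dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

    # Destination/Gateway pairs from the eth0 rows logged above.
    for dest, gw in [("00000000", "0108C80A"),   # default via 10.200.8.1
                     ("0008C80A", "00000000"),   # 10.200.8.0/24, on-link
                     ("10813FA8", "0108C80A"),   # 168.63.129.16 via gateway
                     ("FEA9FEA9", "0108C80A")]:  # 169.254.169.254 via gateway
        print(decode(dest), "via", decode(gw))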
Dec 13 01:03:58.278435 waagent[2031]: 2024-12-13T01:03:58.278390Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Dec 13 01:03:58.352811 waagent[2031]: 2024-12-13T01:03:58.352593Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 864490AC-984D-4B7A-A4C7-EA3CD462E740;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Dec 13 01:03:58.365440 waagent[2031]: 2024-12-13T01:03:58.365333Z INFO MonitorHandler ExtHandler Network interfaces: Dec 13 01:03:58.365440 waagent[2031]: Executing ['ip', '-a', '-o', 'link']: Dec 13 01:03:58.365440 waagent[2031]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Dec 13 01:03:58.365440 waagent[2031]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:fc:d4 brd ff:ff:ff:ff:ff:ff Dec 13 01:03:58.365440 waagent[2031]: 3: enP26553s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:20:fc:d4 brd ff:ff:ff:ff:ff:ff\ altname enP26553p0s2 Dec 13 01:03:58.365440 waagent[2031]: Executing ['ip', '-4', '-a', '-o', 'address']: Dec 13 01:03:58.365440 waagent[2031]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Dec 13 01:03:58.365440 waagent[2031]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Dec 13 01:03:58.365440 waagent[2031]: Executing ['ip', '-6', '-a', '-o', 'address']: Dec 13 01:03:58.365440 waagent[2031]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Dec 13 01:03:58.365440 waagent[2031]: 2: eth0 inet6 fe80::7e1e:52ff:fe20:fcd4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:03:58.365440 waagent[2031]: 3: enP26553s1 inet6 fe80::7e1e:52ff:fe20:fcd4/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Dec 13 01:03:58.462721 waagent[2031]: 2024-12-13T01:03:58.462603Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Dec 13 01:03:58.462721 waagent[2031]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.462721 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.462721 waagent[2031]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.462721 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.462721 waagent[2031]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.462721 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.462721 waagent[2031]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:03:58.462721 waagent[2031]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:03:58.462721 waagent[2031]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:03:58.467134 waagent[2031]: 2024-12-13T01:03:58.467043Z INFO EnvHandler ExtHandler Current Firewall rules: Dec 13 01:03:58.467134 waagent[2031]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.467134 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.467134 waagent[2031]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.467134 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.467134 waagent[2031]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Dec 13 01:03:58.467134 waagent[2031]: pkts bytes target prot opt in out source destination Dec 13 01:03:58.467134 waagent[2031]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Dec 13 01:03:58.467134 waagent[2031]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Dec 13 01:03:58.467134 waagent[2031]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Dec 13 01:03:58.467648 waagent[2031]: 2024-12-13T01:03:58.467506Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Dec 13 01:04:04.759137 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 01:04:04.764524 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:04:05.031464 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:04:05.039755 (kubelet)[2281]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:04:05.410689 kubelet[2281]: E1213 01:04:05.410495 2281 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:04:05.415784 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:04:05.416168 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:04:15.509309 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 01:04:15.516483 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:04:15.903418 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
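The freshly added OUTPUT rules above implement the usual wireserver policy: TCP to port 53 on 168.63.129.16 is allowed, root-owned (UID 0) connections are allowed, and any other new connection to that address is dropped. Roughly equivalent iptables invocations, sketched in Python for illustration (waagent installs these itself; run as root):

    import subprocess

    WIRESERVER = "168.63.129.16"

    rules = [
        # Allow DNS-port traffic to the wireserver.
        ["-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        # Allow root-owned (UID 0) connections, e.g. the agent itself.
        ["-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # Drop every other new connection to the wireserver.
        ["-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW",
         "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w", "-A", "OUTPUT", "-d", WIRESERVER]
                       + rule, check=True)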
Dec 13 01:04:15.908323 (kubelet)[2302]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:04:16.148177 chronyd[1794]: Selected source PHC0 Dec 13 01:04:16.158543 kubelet[2302]: E1213 01:04:16.158338 2302 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:04:16.162011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:04:16.163158 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:04:24.458925 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 01:04:24.471556 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.16.10:58040.service - OpenSSH per-connection server daemon (10.200.16.10:58040). Dec 13 01:04:26.172717 sshd[2311]: Accepted publickey for core from 10.200.16.10 port 58040 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:26.174644 sshd[2311]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:26.176009 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Dec 13 01:04:26.183455 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:04:26.190279 systemd-logind[1788]: New session 3 of user core. Dec 13 01:04:26.200448 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 01:04:26.591482 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:04:26.594958 (kubelet)[2327]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:04:26.732587 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.16.10:58052.service - OpenSSH per-connection server daemon (10.200.16.10:58052). Dec 13 01:04:26.794030 kubelet[2327]: E1213 01:04:26.793957 2327 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:04:26.797496 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:04:26.797861 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:04:27.367809 sshd[2334]: Accepted publickey for core from 10.200.16.10 port 58052 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:27.370330 sshd[2334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:27.376430 systemd-logind[1788]: New session 4 of user core. Dec 13 01:04:27.384542 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 01:04:27.823517 sshd[2334]: pam_unix(sshd:session): session closed for user core Dec 13 01:04:27.830124 systemd[1]: sshd@1-10.200.8.40:22-10.200.16.10:58052.service: Deactivated successfully. Dec 13 01:04:27.836494 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 01:04:27.837623 systemd-logind[1788]: Session 4 logged out. Waiting for processes to exit. Dec 13 01:04:27.839200 systemd-logind[1788]: Removed session 4. 
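kubelet.service fails for the same reason on every attempt: /var/lib/kubelet/config.yaml does not exist yet, because nothing (e.g. a kubeadm join) has written it, so systemd keeps scheduling restarts. The restart-counter timestamps above are spaced close to 10 s apart, which suggests a RestartSec of about 10 s plus startup time (an inference from the spacing, not something the journal states). Checking the cadence:

    from datetime import datetime

    # Restart timestamps for counters 1-3, copied from the journal above.
    stamps = ["01:04:04.759137", "01:04:15.509309", "01:04:26.176009"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for a, b in zip(times, times[1:]):
        print(f"{(b - a).total_seconds():.2f} s")  # ~10.75 s, ~10.67 s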
Dec 13 01:04:27.935987 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.16.10:58060.service - OpenSSH per-connection server daemon (10.200.16.10:58060). Dec 13 01:04:28.569970 sshd[2345]: Accepted publickey for core from 10.200.16.10 port 58060 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:28.571975 sshd[2345]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:28.576859 systemd-logind[1788]: New session 5 of user core. Dec 13 01:04:28.586532 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 01:04:29.019172 sshd[2345]: pam_unix(sshd:session): session closed for user core Dec 13 01:04:29.023987 systemd[1]: sshd@2-10.200.8.40:22-10.200.16.10:58060.service: Deactivated successfully. Dec 13 01:04:29.029582 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 01:04:29.030406 systemd-logind[1788]: Session 5 logged out. Waiting for processes to exit. Dec 13 01:04:29.031604 systemd-logind[1788]: Removed session 5. Dec 13 01:04:29.129940 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.16.10:51378.service - OpenSSH per-connection server daemon (10.200.16.10:51378). Dec 13 01:04:29.765792 sshd[2353]: Accepted publickey for core from 10.200.16.10 port 51378 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:29.767611 sshd[2353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:29.772072 systemd-logind[1788]: New session 6 of user core. Dec 13 01:04:29.783499 systemd[1]: Started session-6.scope - Session 6 of User core. Dec 13 01:04:30.224254 sshd[2353]: pam_unix(sshd:session): session closed for user core Dec 13 01:04:30.230078 systemd[1]: sshd@3-10.200.8.40:22-10.200.16.10:51378.service: Deactivated successfully. Dec 13 01:04:30.233713 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 01:04:30.234343 systemd-logind[1788]: Session 6 logged out. Waiting for processes to exit. Dec 13 01:04:30.235423 systemd-logind[1788]: Removed session 6. Dec 13 01:04:30.334979 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.16.10:51386.service - OpenSSH per-connection server daemon (10.200.16.10:51386). Dec 13 01:04:30.969678 sshd[2361]: Accepted publickey for core from 10.200.16.10 port 51386 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:30.971440 sshd[2361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:30.975831 systemd-logind[1788]: New session 7 of user core. Dec 13 01:04:30.985568 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 01:04:31.462855 sudo[2365]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 01:04:31.463347 sudo[2365]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:04:31.494121 sudo[2365]: pam_unix(sudo:session): session closed for user root Dec 13 01:04:31.598630 sshd[2361]: pam_unix(sshd:session): session closed for user core Dec 13 01:04:31.605538 systemd[1]: sshd@4-10.200.8.40:22-10.200.16.10:51386.service: Deactivated successfully. Dec 13 01:04:31.609683 systemd-logind[1788]: Session 7 logged out. Waiting for processes to exit. Dec 13 01:04:31.610006 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 01:04:31.611689 systemd-logind[1788]: Removed session 7. Dec 13 01:04:31.708126 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.16.10:51402.service - OpenSSH per-connection server daemon (10.200.16.10:51402). 
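The "SHA256:XU24JaPr..." string in each Accepted publickey line is OpenSSH's key fingerprint: the SHA-256 digest of the raw public-key blob, base64-encoded without padding. A sketch that reproduces the format (the blob below is a tiny hypothetical stand-in; a real one comes from an authorized_keys entry):

    import base64
    import hashlib

    def openssh_fingerprint(b64_blob: str) -> str:
        """SHA256 fingerprint in the format sshd logs (unpadded base64)."""
        digest = hashlib.sha256(base64.b64decode(b64_blob)).digest()
        return "SHA256:" + base64.b64encode(digest).rstrip(b"=").decode()

    print(openssh_fingerprint("AAAAB3NzaC1yc2E="))  # stand-in, not the key above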
Dec 13 01:04:32.356725 sshd[2370]: Accepted publickey for core from 10.200.16.10 port 51402 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:32.358886 sshd[2370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:32.365223 systemd-logind[1788]: New session 8 of user core. Dec 13 01:04:32.375557 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 01:04:32.710070 sudo[2375]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 01:04:32.710538 sudo[2375]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:04:32.715381 sudo[2375]: pam_unix(sudo:session): session closed for user root Dec 13 01:04:32.722329 sudo[2374]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Dec 13 01:04:32.722785 sudo[2374]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:04:32.743616 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Dec 13 01:04:32.745739 auditctl[2378]: No rules Dec 13 01:04:32.746440 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 01:04:32.746834 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Dec 13 01:04:32.757005 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Dec 13 01:04:32.788228 augenrules[2397]: No rules Dec 13 01:04:32.791416 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Dec 13 01:04:32.795163 sudo[2374]: pam_unix(sudo:session): session closed for user root Dec 13 01:04:32.902855 sshd[2370]: pam_unix(sshd:session): session closed for user core Dec 13 01:04:32.908853 systemd[1]: sshd@5-10.200.8.40:22-10.200.16.10:51402.service: Deactivated successfully. Dec 13 01:04:32.915728 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 01:04:32.917501 systemd-logind[1788]: Session 8 logged out. Waiting for processes to exit. Dec 13 01:04:32.918946 systemd-logind[1788]: Removed session 8. Dec 13 01:04:33.012931 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.16.10:51414.service - OpenSSH per-connection server daemon (10.200.16.10:51414). Dec 13 01:04:33.648732 sshd[2406]: Accepted publickey for core from 10.200.16.10 port 51414 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4 Dec 13 01:04:33.650750 sshd[2406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 01:04:33.655991 systemd-logind[1788]: New session 9 of user core. Dec 13 01:04:33.666683 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 01:04:33.999731 sudo[2410]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 01:04:34.000114 sudo[2410]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 01:04:34.023839 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Dec 13 01:04:35.941599 systemd[1]: Starting docker.service - Docker Application Container Engine... Dec 13 01:04:35.943850 (dockerd)[2426]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Dec 13 01:04:37.009082 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Dec 13 01:04:37.014887 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Dec 13 01:04:38.042904 update_engine[1796]: I20241213 01:04:38.042760 1796 update_attempter.cc:509] Updating boot flags... Dec 13 01:04:38.470358 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2442) Dec 13 01:04:38.680588 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2443) Dec 13 01:04:38.856232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 41 scanned by (udev-worker) (2443) Dec 13 01:04:40.840522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:04:40.845226 (kubelet)[2536]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:04:40.905035 kubelet[2536]: E1213 01:04:40.903539 2536 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:04:40.907008 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:04:40.907275 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 01:04:41.506118 dockerd[2426]: time="2024-12-13T01:04:41.506036251Z" level=info msg="Starting up" Dec 13 01:04:44.567457 dockerd[2426]: time="2024-12-13T01:04:44.567041180Z" level=info msg="Loading containers: start." Dec 13 01:04:44.856369 kernel: Initializing XFRM netlink socket Dec 13 01:04:44.983341 systemd-networkd[1391]: docker0: Link UP Dec 13 01:04:45.037749 dockerd[2426]: time="2024-12-13T01:04:45.037693853Z" level=info msg="Loading containers: done." Dec 13 01:04:45.562101 dockerd[2426]: time="2024-12-13T01:04:45.562018129Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 01:04:45.562603 dockerd[2426]: time="2024-12-13T01:04:45.562203933Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Dec 13 01:04:45.562603 dockerd[2426]: time="2024-12-13T01:04:45.562432537Z" level=info msg="Daemon has completed initialization" Dec 13 01:04:45.824765 dockerd[2426]: time="2024-12-13T01:04:45.822876290Z" level=info msg="API listen on /run/docker.sock" Dec 13 01:04:45.823328 systemd[1]: Started docker.service - Docker Application Container Engine. Dec 13 01:04:47.844782 containerd[1818]: time="2024-12-13T01:04:47.844728244Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 01:04:48.535291 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount712832012.mount: Deactivated successfully. 
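Once dockerd logs "API listen on /run/docker.sock", the Engine API is reachable over that Unix socket. A minimal liveness check without the Docker SDK (needs access to the socket, i.e. root or the docker group; /_ping is the engine's standard health endpoint):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTP over a Unix socket, enough to talk to /run/docker.sock."""
        def __init__(self, path="/run/docker.sock"):
            super().__init__("localhost")
            self.unix_path = path

        def connect(self):
            self.sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            self.sock.connect(self.unix_path)

    conn = UnixHTTPConnection()
    conn.request("GET", "/_ping")
    print(conn.getresponse().read())  # b'OK' once the daemon is up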
Dec 13 01:04:50.427686 containerd[1818]: time="2024-12-13T01:04:50.427609506Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:50.431546 containerd[1818]: time="2024-12-13T01:04:50.431467596Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.12: active requests=0, bytes read=35139262" Dec 13 01:04:50.434385 containerd[1818]: time="2024-12-13T01:04:50.434316063Z" level=info msg="ImageCreate event name:\"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:50.439789 containerd[1818]: time="2024-12-13T01:04:50.439748590Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:50.441551 containerd[1818]: time="2024-12-13T01:04:50.440742513Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.12\" with image id \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26\", size \"35136054\" in 2.595962168s" Dec 13 01:04:50.441551 containerd[1818]: time="2024-12-13T01:04:50.440795814Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 01:04:50.466398 containerd[1818]: time="2024-12-13T01:04:50.466359312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 01:04:51.009096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Dec 13 01:04:51.016488 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:04:51.156853 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:04:51.162725 (kubelet)[2753]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:04:51.637614 kubelet[2753]: E1213 01:04:51.637508 2753 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:04:51.641051 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:04:51.642122 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
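Each "Pulled image" line carries the image size in bytes and the wall-clock duration of the pull, so effective throughput falls out directly; for the kube-apiserver pull above:

    # Size and duration from the kube-apiserver pull logged above.
    size_bytes, seconds = 35_136_054, 2.595962168
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # ~12.9 MiB/s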
Dec 13 01:04:52.782181 containerd[1818]: time="2024-12-13T01:04:52.782107429Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:52.784716 containerd[1818]: time="2024-12-13T01:04:52.784631988Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.12: active requests=0, bytes read=32217740" Dec 13 01:04:52.787805 containerd[1818]: time="2024-12-13T01:04:52.787742261Z" level=info msg="ImageCreate event name:\"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:52.793386 containerd[1818]: time="2024-12-13T01:04:52.793321591Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:52.794572 containerd[1818]: time="2024-12-13T01:04:52.794390616Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.12\" with image id \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412\", size \"33662844\" in 2.327985003s" Dec 13 01:04:52.794572 containerd[1818]: time="2024-12-13T01:04:52.794438617Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 01:04:52.821623 containerd[1818]: time="2024-12-13T01:04:52.821568451Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 01:04:54.084465 containerd[1818]: time="2024-12-13T01:04:54.084397163Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:54.086549 containerd[1818]: time="2024-12-13T01:04:54.086462511Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.12: active requests=0, bytes read=17332830" Dec 13 01:04:54.094574 containerd[1818]: time="2024-12-13T01:04:54.094523999Z" level=info msg="ImageCreate event name:\"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:54.099840 containerd[1818]: time="2024-12-13T01:04:54.099761922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:54.104231 containerd[1818]: time="2024-12-13T01:04:54.102838394Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.12\" with image id \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c\", size \"18777952\" in 1.281213041s" Dec 13 01:04:54.104231 containerd[1818]: time="2024-12-13T01:04:54.102893095Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 01:04:54.131549 
containerd[1818]: time="2024-12-13T01:04:54.131488863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 01:04:55.539587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3170336698.mount: Deactivated successfully. Dec 13 01:04:56.061573 containerd[1818]: time="2024-12-13T01:04:56.061489677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:56.064949 containerd[1818]: time="2024-12-13T01:04:56.064857046Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619966" Dec 13 01:04:56.068460 containerd[1818]: time="2024-12-13T01:04:56.068395119Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:56.073046 containerd[1818]: time="2024-12-13T01:04:56.072980513Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:56.074164 containerd[1818]: time="2024-12-13T01:04:56.073642827Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.942101862s" Dec 13 01:04:56.074164 containerd[1818]: time="2024-12-13T01:04:56.073689827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 01:04:56.099749 containerd[1818]: time="2024-12-13T01:04:56.099699061Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 01:04:56.836883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2198988677.mount: Deactivated successfully. 
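The var-lib-containerd-tmpmounts-containerd\x2dmount... units that keep appearing are systemd mount units: unit names encode the mount path with '/' written as '-' and a literal '-' escaped as '\x2d'. Undoing the escaping, in a simplified sketch covering just these two cases:

    def unescape_unit(name: str) -> str:
        """Undo systemd unit-name escaping: '\\x2d' -> '-', '-' -> '/'."""
        out, i = [], 0
        while i < len(name):
            if name[i] == "\\" and name[i + 1:i + 2] == "x":
                out.append(chr(int(name[i + 2:i + 4], 16)))
                i += 4
            elif name[i] == "-":
                out.append("/")
                i += 1
            else:
                out.append(name[i])
                i += 1
        return "".join(out)

    # Prints var/lib/containerd/tmpmounts/containerd-mount2198988677
    # (the leading '/' is implicit in mount-unit names).
    print(unescape_unit(
        r"var-lib-containerd-tmpmounts-containerd\x2dmount2198988677"))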
Dec 13 01:04:58.117420 containerd[1818]: time="2024-12-13T01:04:58.117347048Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.120679 containerd[1818]: time="2024-12-13T01:04:58.120590730Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Dec 13 01:04:58.124266 containerd[1818]: time="2024-12-13T01:04:58.124182020Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.130943 containerd[1818]: time="2024-12-13T01:04:58.130871089Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.132145 containerd[1818]: time="2024-12-13T01:04:58.131960417Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.032209854s" Dec 13 01:04:58.132145 containerd[1818]: time="2024-12-13T01:04:58.132008818Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 01:04:58.156370 containerd[1818]: time="2024-12-13T01:04:58.156294931Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 01:04:58.806662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4275657629.mount: Deactivated successfully. 
Dec 13 01:04:58.826611 containerd[1818]: time="2024-12-13T01:04:58.826549442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.828572 containerd[1818]: time="2024-12-13T01:04:58.828503291Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Dec 13 01:04:58.833747 containerd[1818]: time="2024-12-13T01:04:58.833685322Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.837508 containerd[1818]: time="2024-12-13T01:04:58.837451517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:04:58.838728 containerd[1818]: time="2024-12-13T01:04:58.838171935Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 681.596097ms" Dec 13 01:04:58.838728 containerd[1818]: time="2024-12-13T01:04:58.838233336Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 01:04:58.863315 containerd[1818]: time="2024-12-13T01:04:58.863262068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 01:04:59.509254 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3727614631.mount: Deactivated successfully. Dec 13 01:05:01.760094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Dec 13 01:05:01.767535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:01.921423 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:01.934714 (kubelet)[2904]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 01:05:01.985619 kubelet[2904]: E1213 01:05:01.985548 2904 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 01:05:01.988802 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 01:05:01.989137 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 01:05:02.556668 containerd[1818]: time="2024-12-13T01:05:02.556596352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:02.558823 containerd[1818]: time="2024-12-13T01:05:02.558746706Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Dec 13 01:05:02.562877 containerd[1818]: time="2024-12-13T01:05:02.562832009Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:02.568171 containerd[1818]: time="2024-12-13T01:05:02.568093042Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:02.569618 containerd[1818]: time="2024-12-13T01:05:02.569576080Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.70626791s" Dec 13 01:05:02.569618 containerd[1818]: time="2024-12-13T01:05:02.569620281Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 01:05:05.810228 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:05.817518 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:05.851527 systemd[1]: Reloading requested from client PID 2979 ('systemctl') (unit session-9.scope)... Dec 13 01:05:05.851553 systemd[1]: Reloading... Dec 13 01:05:06.019238 zram_generator::config[3019]: No configuration found. Dec 13 01:05:06.145965 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:05:06.222774 systemd[1]: Reloading finished in 370 ms. Dec 13 01:05:06.277420 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 01:05:06.277539 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 01:05:06.278331 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:06.284998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:06.464454 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:06.464772 (kubelet)[3101]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:05:06.518277 kubelet[3101]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:05:06.518277 kubelet[3101]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Dec 13 01:05:06.518277 kubelet[3101]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:05:06.518905 kubelet[3101]: I1213 01:05:06.518369 3101 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:05:06.806159 kubelet[3101]: I1213 01:05:06.806100 3101 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:05:06.806159 kubelet[3101]: I1213 01:05:06.806148 3101 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:05:06.806546 kubelet[3101]: I1213 01:05:06.806521 3101 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:05:07.094282 kubelet[3101]: E1213 01:05:07.093859 3101 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.097331 kubelet[3101]: I1213 01:05:07.097129 3101 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:05:07.136027 kubelet[3101]: I1213 01:05:07.135987 3101 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 01:05:07.136588 kubelet[3101]: I1213 01:05:07.136562 3101 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:05:07.136825 kubelet[3101]: I1213 01:05:07.136804 3101 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:05:07.137018 kubelet[3101]: I1213 01:05:07.136840 3101 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:05:07.137018 kubelet[3101]: I1213 01:05:07.136854 3101 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 
01:05:07.138191 kubelet[3101]: I1213 01:05:07.138158 3101 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:07.138362 kubelet[3101]: I1213 01:05:07.138345 3101 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:05:07.138425 kubelet[3101]: I1213 01:05:07.138374 3101 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:05:07.138425 kubelet[3101]: I1213 01:05:07.138416 3101 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:05:07.138500 kubelet[3101]: I1213 01:05:07.138439 3101 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:05:07.140317 kubelet[3101]: W1213 01:05:07.140094 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.140317 kubelet[3101]: E1213 01:05:07.140170 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.140317 kubelet[3101]: W1213 01:05:07.140262 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-672c6884da&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.140317 kubelet[3101]: E1213 01:05:07.140298 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-672c6884da&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.141784 kubelet[3101]: I1213 01:05:07.141439 3101 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:05:07.145262 kubelet[3101]: I1213 01:05:07.145240 3101 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:05:07.145440 kubelet[3101]: W1213 01:05:07.145428 3101 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
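Every client call the new kubelet makes (the CSR submission, the Service and Node reflectors above) fails with "connection refused" against 10.200.8.40:6443, and that is expected at this point: the kubelet itself is what will start kube-apiserver, as a static pod from the /etc/kubernetes/manifests path it just registered. The same check in miniature:

    import socket

    # connect_ex returns 0 on success, an errno (e.g. ECONNREFUSED) otherwise.
    rc = socket.socket().connect_ex(("10.200.8.40", 6443))
    print("api server up" if rc == 0 else "connection refused (errno %d)" % rc)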
Dec 13 01:05:07.146569 kubelet[3101]: I1213 01:05:07.146417 3101 server.go:1256] "Started kubelet" Dec 13 01:05:07.146569 kubelet[3101]: I1213 01:05:07.146499 3101 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:05:07.148006 kubelet[3101]: I1213 01:05:07.147449 3101 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:05:07.150635 kubelet[3101]: I1213 01:05:07.150606 3101 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:05:07.152778 kubelet[3101]: I1213 01:05:07.151677 3101 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:05:07.152778 kubelet[3101]: I1213 01:05:07.151937 3101 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:05:07.156962 kubelet[3101]: E1213 01:05:07.156936 3101 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-672c6884da.18109705ab8c8bec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-672c6884da,UID:ci-4081.2.1-a-672c6884da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-672c6884da,},FirstTimestamp:2024-12-13 01:05:07.146386412 +0000 UTC m=+0.673384736,LastTimestamp:2024-12-13 01:05:07.146386412 +0000 UTC m=+0.673384736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-672c6884da,}" Dec 13 01:05:07.157263 kubelet[3101]: I1213 01:05:07.157251 3101 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:05:07.159869 kubelet[3101]: I1213 01:05:07.159844 3101 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:05:07.160025 kubelet[3101]: I1213 01:05:07.160014 3101 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:05:07.161804 kubelet[3101]: W1213 01:05:07.161755 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.161944 kubelet[3101]: E1213 01:05:07.161930 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.162327 kubelet[3101]: E1213 01:05:07.162308 3101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-672c6884da?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Dec 13 01:05:07.162631 kubelet[3101]: I1213 01:05:07.162615 3101 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:05:07.162819 kubelet[3101]: I1213 01:05:07.162800 3101 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Dec 13 01:05:07.164535 kubelet[3101]: I1213 01:05:07.164507 3101 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:05:07.182508 kubelet[3101]: E1213 01:05:07.181608 3101 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:05:07.202360 kubelet[3101]: I1213 01:05:07.202086 3101 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:05:07.205167 kubelet[3101]: I1213 01:05:07.204670 3101 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:05:07.205167 kubelet[3101]: I1213 01:05:07.204721 3101 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:05:07.205167 kubelet[3101]: I1213 01:05:07.204751 3101 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:05:07.205167 kubelet[3101]: E1213 01:05:07.204812 3101 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:05:07.209831 kubelet[3101]: W1213 01:05:07.209772 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.209965 kubelet[3101]: E1213 01:05:07.209841 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:07.230435 kubelet[3101]: I1213 01:05:07.230396 3101 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:05:07.230435 kubelet[3101]: I1213 01:05:07.230426 3101 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:05:07.230690 kubelet[3101]: I1213 01:05:07.230477 3101 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:07.238636 kubelet[3101]: I1213 01:05:07.238586 3101 policy_none.go:49] "None policy: Start" Dec 13 01:05:07.239981 kubelet[3101]: I1213 01:05:07.239517 3101 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:05:07.239981 kubelet[3101]: I1213 01:05:07.239599 3101 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:05:07.250158 kubelet[3101]: I1213 01:05:07.250114 3101 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:05:07.250536 kubelet[3101]: I1213 01:05:07.250511 3101 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:05:07.256032 kubelet[3101]: E1213 01:05:07.255995 3101 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.1-a-672c6884da\" not found" Dec 13 01:05:07.260299 kubelet[3101]: I1213 01:05:07.260267 3101 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.260780 kubelet[3101]: E1213 01:05:07.260758 3101 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.305586 kubelet[3101]: I1213 01:05:07.305514 3101 topology_manager.go:215] 
"Topology Admit Handler" podUID="eb0be428310d4b470e8973c3b4c3586b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.308118 kubelet[3101]: I1213 01:05:07.308079 3101 topology_manager.go:215] "Topology Admit Handler" podUID="92f9cee8044c41d16ed48a33ff2c6be1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.311274 kubelet[3101]: I1213 01:05:07.310045 3101 topology_manager.go:215] "Topology Admit Handler" podUID="a7c084c44b590e5224ed7981cfadfb7f" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.361937 kubelet[3101]: I1213 01:05:07.361489 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.361937 kubelet[3101]: I1213 01:05:07.361562 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.361937 kubelet[3101]: I1213 01:05:07.361596 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.361937 kubelet[3101]: I1213 01:05:07.361642 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7c084c44b590e5224ed7981cfadfb7f-kubeconfig\") pod \"kube-scheduler-ci-4081.2.1-a-672c6884da\" (UID: \"a7c084c44b590e5224ed7981cfadfb7f\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.361937 kubelet[3101]: I1213 01:05:07.361678 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.362340 kubelet[3101]: I1213 01:05:07.361708 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.362340 kubelet[3101]: I1213 01:05:07.361738 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " 
pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.362340 kubelet[3101]: I1213 01:05:07.361766 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.362340 kubelet[3101]: I1213 01:05:07.361800 3101 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.363162 kubelet[3101]: E1213 01:05:07.362988 3101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-672c6884da?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Dec 13 01:05:07.464257 kubelet[3101]: I1213 01:05:07.464195 3101 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.464874 kubelet[3101]: E1213 01:05:07.464843 3101 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.615198 containerd[1818]: time="2024-12-13T01:05:07.615025205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-672c6884da,Uid:eb0be428310d4b470e8973c3b4c3586b,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:07.622000 containerd[1818]: time="2024-12-13T01:05:07.621740354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-672c6884da,Uid:a7c084c44b590e5224ed7981cfadfb7f,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:07.622000 containerd[1818]: time="2024-12-13T01:05:07.621740354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-672c6884da,Uid:92f9cee8044c41d16ed48a33ff2c6be1,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:07.764646 kubelet[3101]: E1213 01:05:07.764591 3101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-672c6884da?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Dec 13 01:05:07.867372 kubelet[3101]: I1213 01:05:07.867239 3101 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:07.867950 kubelet[3101]: E1213 01:05:07.867917 3101 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:08.101629 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4204022736.mount: Deactivated successfully. 
Dec 13 01:05:08.140713 containerd[1818]: time="2024-12-13T01:05:08.140524860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:05:08.143804 containerd[1818]: time="2024-12-13T01:05:08.143742332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Dec 13 01:05:08.148025 containerd[1818]: time="2024-12-13T01:05:08.147980326Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:05:08.151222 containerd[1818]: time="2024-12-13T01:05:08.151169197Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:05:08.154523 containerd[1818]: time="2024-12-13T01:05:08.154474770Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:05:08.158149 containerd[1818]: time="2024-12-13T01:05:08.158105750Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:05:08.160175 containerd[1818]: time="2024-12-13T01:05:08.160104995Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 01:05:08.166955 containerd[1818]: time="2024-12-13T01:05:08.166777443Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 01:05:08.168264 containerd[1818]: time="2024-12-13T01:05:08.167892767Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 546.053411ms" Dec 13 01:05:08.169607 containerd[1818]: time="2024-12-13T01:05:08.169572005Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 554.425296ms" Dec 13 01:05:08.172302 containerd[1818]: time="2024-12-13T01:05:08.172267964Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 550.388907ms" Dec 13 01:05:08.274088 kubelet[3101]: W1213 01:05:08.274016 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-672c6884da&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection 
refused Dec 13 01:05:08.274088 kubelet[3101]: E1213 01:05:08.274099 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.1-a-672c6884da&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.286638 kubelet[3101]: W1213 01:05:08.286576 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.286638 kubelet[3101]: E1213 01:05:08.286643 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.402931 kubelet[3101]: W1213 01:05:08.402740 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.402931 kubelet[3101]: E1213 01:05:08.402827 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.565202 kubelet[3101]: E1213 01:05:08.565154 3101 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.1-a-672c6884da?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Dec 13 01:05:08.671786 kubelet[3101]: I1213 01:05:08.671640 3101 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:08.672450 kubelet[3101]: E1213 01:05:08.672333 3101 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:08.756977 kubelet[3101]: W1213 01:05:08.756917 3101 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.756977 kubelet[3101]: E1213 01:05:08.756977 3101 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:08.861349 kubelet[3101]: E1213 01:05:08.861296 3101 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.40:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.40:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.1-a-672c6884da.18109705ab8c8bec default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.1-a-672c6884da,UID:ci-4081.2.1-a-672c6884da,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.1-a-672c6884da,},FirstTimestamp:2024-12-13 01:05:07.146386412 +0000 UTC m=+0.673384736,LastTimestamp:2024-12-13 01:05:07.146386412 +0000 UTC m=+0.673384736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.1-a-672c6884da,}" Dec 13 01:05:08.958385 containerd[1818]: time="2024-12-13T01:05:08.956995369Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:08.958385 containerd[1818]: time="2024-12-13T01:05:08.957074871Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:08.958385 containerd[1818]: time="2024-12-13T01:05:08.957110071Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:08.958385 containerd[1818]: time="2024-12-13T01:05:08.957283175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:08.970484 containerd[1818]: time="2024-12-13T01:05:08.970136460Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:08.970484 containerd[1818]: time="2024-12-13T01:05:08.970277963Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:08.970484 containerd[1818]: time="2024-12-13T01:05:08.970346765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:08.970900 containerd[1818]: time="2024-12-13T01:05:08.970741574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:08.972846 containerd[1818]: time="2024-12-13T01:05:08.972736918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:08.974343 containerd[1818]: time="2024-12-13T01:05:08.974293452Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:08.974434 containerd[1818]: time="2024-12-13T01:05:08.974368254Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:08.976227 containerd[1818]: time="2024-12-13T01:05:08.975194172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:09.104819 containerd[1818]: time="2024-12-13T01:05:09.104762846Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.1-a-672c6884da,Uid:92f9cee8044c41d16ed48a33ff2c6be1,Namespace:kube-system,Attempt:0,} returns sandbox id \"9312804a85ebb598085b311c659aa932e9a0d2d7abbb4b25894136b27bdcb564\"" Dec 13 01:05:09.115568 containerd[1818]: time="2024-12-13T01:05:09.115514285Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.1-a-672c6884da,Uid:a7c084c44b590e5224ed7981cfadfb7f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ae25c26812f8bf53cf0fadc201726382adbd1439fed8b87e7456723de50180c3\"" Dec 13 01:05:09.116336 containerd[1818]: time="2024-12-13T01:05:09.116292702Z" level=info msg="CreateContainer within sandbox \"9312804a85ebb598085b311c659aa932e9a0d2d7abbb4b25894136b27bdcb564\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 01:05:09.121673 containerd[1818]: time="2024-12-13T01:05:09.121637420Z" level=info msg="CreateContainer within sandbox \"ae25c26812f8bf53cf0fadc201726382adbd1439fed8b87e7456723de50180c3\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 01:05:09.125447 containerd[1818]: time="2024-12-13T01:05:09.125411204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.1-a-672c6884da,Uid:eb0be428310d4b470e8973c3b4c3586b,Namespace:kube-system,Attempt:0,} returns sandbox id \"e297971ac31ad3812b53fe682c2c4c70615484e64e299051d7e9e0c196e403cc\"" Dec 13 01:05:09.129083 containerd[1818]: time="2024-12-13T01:05:09.129041685Z" level=info msg="CreateContainer within sandbox \"e297971ac31ad3812b53fe682c2c4c70615484e64e299051d7e9e0c196e403cc\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 01:05:09.155989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1213658613.mount: Deactivated successfully. 
Dec 13 01:05:09.192965 containerd[1818]: time="2024-12-13T01:05:09.192904701Z" level=info msg="CreateContainer within sandbox \"9312804a85ebb598085b311c659aa932e9a0d2d7abbb4b25894136b27bdcb564\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"556bc601b4bc138e012425705d010801829a7049ace9c586124bd143216e5089\"" Dec 13 01:05:09.193941 containerd[1818]: time="2024-12-13T01:05:09.193901923Z" level=info msg="StartContainer for \"556bc601b4bc138e012425705d010801829a7049ace9c586124bd143216e5089\"" Dec 13 01:05:09.201726 containerd[1818]: time="2024-12-13T01:05:09.201680796Z" level=info msg="CreateContainer within sandbox \"e297971ac31ad3812b53fe682c2c4c70615484e64e299051d7e9e0c196e403cc\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"739eaa477560159435d90daf8ddd251c000a099872516409459087069ff047cb\"" Dec 13 01:05:09.203094 containerd[1818]: time="2024-12-13T01:05:09.203064726Z" level=info msg="StartContainer for \"739eaa477560159435d90daf8ddd251c000a099872516409459087069ff047cb\"" Dec 13 01:05:09.205931 containerd[1818]: time="2024-12-13T01:05:09.205778487Z" level=info msg="CreateContainer within sandbox \"ae25c26812f8bf53cf0fadc201726382adbd1439fed8b87e7456723de50180c3\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"e5177b1614cf04ab85eb2608d9c0bbafb876d1ee11bbfac01b6a073dabdaf30b\"" Dec 13 01:05:09.206330 containerd[1818]: time="2024-12-13T01:05:09.206306698Z" level=info msg="StartContainer for \"e5177b1614cf04ab85eb2608d9c0bbafb876d1ee11bbfac01b6a073dabdaf30b\"" Dec 13 01:05:09.297787 kubelet[3101]: E1213 01:05:09.297645 3101 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.40:6443: connect: connection refused Dec 13 01:05:09.353478 containerd[1818]: time="2024-12-13T01:05:09.353426961Z" level=info msg="StartContainer for \"739eaa477560159435d90daf8ddd251c000a099872516409459087069ff047cb\" returns successfully" Dec 13 01:05:09.403237 containerd[1818]: time="2024-12-13T01:05:09.402546951Z" level=info msg="StartContainer for \"e5177b1614cf04ab85eb2608d9c0bbafb876d1ee11bbfac01b6a073dabdaf30b\" returns successfully" Dec 13 01:05:09.405694 containerd[1818]: time="2024-12-13T01:05:09.405361413Z" level=info msg="StartContainer for \"556bc601b4bc138e012425705d010801829a7049ace9c586124bd143216e5089\" returns successfully" Dec 13 01:05:10.278292 kubelet[3101]: I1213 01:05:10.276423 3101 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:11.702257 kubelet[3101]: E1213 01:05:11.702167 3101 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.1-a-672c6884da\" not found" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:11.812245 kubelet[3101]: I1213 01:05:11.810738 3101 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:12.144452 kubelet[3101]: I1213 01:05:12.143137 3101 apiserver.go:52] "Watching apiserver" Dec 13 01:05:12.160369 kubelet[3101]: I1213 01:05:12.160315 3101 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:05:12.293332 kubelet[3101]: E1213 01:05:12.293108 3101 kubelet.go:1921] "Failed creating a mirror pod for" err="pods 
\"kube-apiserver-ci-4081.2.1-a-672c6884da\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:14.764935 systemd[1]: Reloading requested from client PID 3372 ('systemctl') (unit session-9.scope)... Dec 13 01:05:14.764957 systemd[1]: Reloading... Dec 13 01:05:14.865373 zram_generator::config[3410]: No configuration found. Dec 13 01:05:15.025229 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 01:05:15.112810 systemd[1]: Reloading finished in 347 ms. Dec 13 01:05:15.159801 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:15.160163 kubelet[3101]: I1213 01:05:15.159843 3101 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:05:15.175755 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 01:05:15.176340 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:15.188323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 01:05:15.442071 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 01:05:15.453763 (kubelet)[3489]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 01:05:15.509255 kubelet[3489]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:05:15.509753 kubelet[3489]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 01:05:15.509753 kubelet[3489]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 01:05:15.509917 kubelet[3489]: I1213 01:05:15.509860 3489 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 01:05:15.514909 kubelet[3489]: I1213 01:05:15.514873 3489 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 01:05:15.514909 kubelet[3489]: I1213 01:05:15.514900 3489 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 01:05:15.515156 kubelet[3489]: I1213 01:05:15.515137 3489 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 01:05:15.516769 kubelet[3489]: I1213 01:05:15.516741 3489 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 01:05:15.519486 kubelet[3489]: I1213 01:05:15.519080 3489 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 01:05:15.526166 kubelet[3489]: I1213 01:05:15.526139 3489 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 01:05:15.526705 kubelet[3489]: I1213 01:05:15.526682 3489 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 01:05:15.526946 kubelet[3489]: I1213 01:05:15.526912 3489 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 01:05:15.527120 kubelet[3489]: I1213 01:05:15.526958 3489 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 01:05:15.527810 kubelet[3489]: I1213 01:05:15.527773 3489 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 01:05:15.527862 kubelet[3489]: I1213 01:05:15.527842 3489 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:15.527994 kubelet[3489]: I1213 01:05:15.527979 3489 kubelet.go:396] "Attempting to sync node with API server" Dec 13 01:05:15.528044 kubelet[3489]: I1213 01:05:15.528002 3489 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 01:05:15.528044 kubelet[3489]: I1213 01:05:15.528043 3489 kubelet.go:312] "Adding apiserver pod source" Dec 13 01:05:15.528126 kubelet[3489]: I1213 01:05:15.528060 3489 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 01:05:15.536340 kubelet[3489]: I1213 01:05:15.532935 3489 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Dec 13 01:05:15.536340 kubelet[3489]: I1213 01:05:15.533163 3489 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 01:05:15.536340 kubelet[3489]: I1213 01:05:15.533799 3489 server.go:1256] "Started kubelet" Dec 13 01:05:15.536340 kubelet[3489]: I1213 01:05:15.534592 3489 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 01:05:15.536340 kubelet[3489]: I1213 01:05:15.535603 3489 server.go:461] "Adding debug handlers to kubelet server" Dec 13 01:05:15.543715 kubelet[3489]: I1213 01:05:15.543251 3489 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 01:05:15.543715 kubelet[3489]: I1213 01:05:15.543562 3489 server.go:233] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 01:05:15.548552 kubelet[3489]: I1213 01:05:15.547529 3489 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 01:05:15.555188 kubelet[3489]: I1213 01:05:15.555147 3489 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 01:05:15.567922 kubelet[3489]: I1213 01:05:15.563014 3489 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 01:05:15.567922 kubelet[3489]: I1213 01:05:15.563223 3489 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 01:05:15.576724 kubelet[3489]: I1213 01:05:15.574622 3489 factory.go:221] Registration of the systemd container factory successfully Dec 13 01:05:15.577035 kubelet[3489]: I1213 01:05:15.577002 3489 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 01:05:15.579245 kubelet[3489]: I1213 01:05:15.579200 3489 factory.go:221] Registration of the containerd container factory successfully Dec 13 01:05:15.580372 kubelet[3489]: I1213 01:05:15.580331 3489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 01:05:15.582014 kubelet[3489]: I1213 01:05:15.581992 3489 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 01:05:15.582083 kubelet[3489]: I1213 01:05:15.582075 3489 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 01:05:15.582147 kubelet[3489]: I1213 01:05:15.582102 3489 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 01:05:15.582190 kubelet[3489]: E1213 01:05:15.582175 3489 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 01:05:15.596018 kubelet[3489]: E1213 01:05:15.595981 3489 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 01:05:15.662693 kubelet[3489]: I1213 01:05:15.662642 3489 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.671042 kubelet[3489]: I1213 01:05:15.671002 3489 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 01:05:15.671042 kubelet[3489]: I1213 01:05:15.671026 3489 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 01:05:15.671042 kubelet[3489]: I1213 01:05:15.671047 3489 state_mem.go:36] "Initialized new in-memory state store" Dec 13 01:05:15.671338 kubelet[3489]: I1213 01:05:15.671278 3489 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 01:05:15.671338 kubelet[3489]: I1213 01:05:15.671313 3489 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 01:05:15.671338 kubelet[3489]: I1213 01:05:15.671324 3489 policy_none.go:49] "None policy: Start" Dec 13 01:05:15.672169 kubelet[3489]: I1213 01:05:15.672149 3489 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 01:05:15.672261 kubelet[3489]: I1213 01:05:15.672179 3489 state_mem.go:35] "Initializing new in-memory state store" Dec 13 01:05:15.672448 kubelet[3489]: I1213 01:05:15.672424 3489 state_mem.go:75] "Updated machine memory state" Dec 13 01:05:15.673892 kubelet[3489]: I1213 01:05:15.673868 3489 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 01:05:15.676616 kubelet[3489]: I1213 01:05:15.674223 3489 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 01:05:15.679332 kubelet[3489]: I1213 01:05:15.678827 3489 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.679332 kubelet[3489]: I1213 01:05:15.678931 3489 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.684302 kubelet[3489]: I1213 01:05:15.684257 3489 topology_manager.go:215] "Topology Admit Handler" podUID="eb0be428310d4b470e8973c3b4c3586b" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.684411 kubelet[3489]: I1213 01:05:15.684402 3489 topology_manager.go:215] "Topology Admit Handler" podUID="92f9cee8044c41d16ed48a33ff2c6be1" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.686299 kubelet[3489]: I1213 01:05:15.684465 3489 topology_manager.go:215] "Topology Admit Handler" podUID="a7c084c44b590e5224ed7981cfadfb7f" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.703908 kubelet[3489]: W1213 01:05:15.701089 3489 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:05:15.703908 kubelet[3489]: W1213 01:05:15.701948 3489 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:05:15.703908 kubelet[3489]: W1213 01:05:15.702021 3489 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:05:15.865300 kubelet[3489]: I1213 01:05:15.865023 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-k8s-certs\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.865701 kubelet[3489]: I1213 01:05:15.865681 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.865944 kubelet[3489]: I1213 01:05:15.865931 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866157 kubelet[3489]: I1213 01:05:15.866097 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866407 kubelet[3489]: I1213 01:05:15.866256 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866407 kubelet[3489]: I1213 01:05:15.866338 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-ca-certs\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866720 kubelet[3489]: I1213 01:05:15.866490 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/eb0be428310d4b470e8973c3b4c3586b-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.1-a-672c6884da\" (UID: \"eb0be428310d4b470e8973c3b4c3586b\") " pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866720 kubelet[3489]: I1213 01:05:15.866532 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/92f9cee8044c41d16ed48a33ff2c6be1-ca-certs\") pod \"kube-controller-manager-ci-4081.2.1-a-672c6884da\" (UID: \"92f9cee8044c41d16ed48a33ff2c6be1\") " pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" Dec 13 01:05:15.866720 kubelet[3489]: I1213 01:05:15.866685 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a7c084c44b590e5224ed7981cfadfb7f-kubeconfig\") pod 
\"kube-scheduler-ci-4081.2.1-a-672c6884da\" (UID: \"a7c084c44b590e5224ed7981cfadfb7f\") " pod="kube-system/kube-scheduler-ci-4081.2.1-a-672c6884da" Dec 13 01:05:16.529417 kubelet[3489]: I1213 01:05:16.529359 3489 apiserver.go:52] "Watching apiserver" Dec 13 01:05:16.563938 kubelet[3489]: I1213 01:05:16.563867 3489 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 01:05:16.645071 kubelet[3489]: W1213 01:05:16.644968 3489 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 01:05:16.645071 kubelet[3489]: E1213 01:05:16.645058 3489 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.1-a-672c6884da\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" Dec 13 01:05:16.700771 kubelet[3489]: I1213 01:05:16.700691 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.1-a-672c6884da" podStartSLOduration=1.700626072 podStartE2EDuration="1.700626072s" podCreationTimestamp="2024-12-13 01:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:16.680928513 +0000 UTC m=+1.221912813" watchObservedRunningTime="2024-12-13 01:05:16.700626072 +0000 UTC m=+1.241610372" Dec 13 01:05:16.713130 kubelet[3489]: I1213 01:05:16.713088 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.1-a-672c6884da" podStartSLOduration=1.7130408620000002 podStartE2EDuration="1.713040862s" podCreationTimestamp="2024-12-13 01:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:16.70009186 +0000 UTC m=+1.241076160" watchObservedRunningTime="2024-12-13 01:05:16.713040862 +0000 UTC m=+1.254025262" Dec 13 01:05:16.713464 kubelet[3489]: I1213 01:05:16.713219 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.1-a-672c6884da" podStartSLOduration=1.7131856650000001 podStartE2EDuration="1.713185665s" podCreationTimestamp="2024-12-13 01:05:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:16.712266144 +0000 UTC m=+1.253250544" watchObservedRunningTime="2024-12-13 01:05:16.713185665 +0000 UTC m=+1.254170065" Dec 13 01:05:21.422631 sudo[2410]: pam_unix(sudo:session): session closed for user root Dec 13 01:05:21.529572 sshd[2406]: pam_unix(sshd:session): session closed for user core Dec 13 01:05:21.533413 systemd[1]: sshd@6-10.200.8.40:22-10.200.16.10:51414.service: Deactivated successfully. Dec 13 01:05:21.539260 systemd-logind[1788]: Session 9 logged out. Waiting for processes to exit. Dec 13 01:05:21.540299 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 01:05:21.541640 systemd-logind[1788]: Removed session 9. 
Dec 13 01:05:28.306698 kubelet[3489]: I1213 01:05:28.306640 3489 topology_manager.go:215] "Topology Admit Handler" podUID="fadb7ebd-263e-4c1e-8963-a1d7c32faa11" podNamespace="kube-system" podName="kube-proxy-ct6v6" Dec 13 01:05:28.354899 kubelet[3489]: I1213 01:05:28.354858 3489 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 01:05:28.357027 kubelet[3489]: I1213 01:05:28.356525 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/fadb7ebd-263e-4c1e-8963-a1d7c32faa11-kube-proxy\") pod \"kube-proxy-ct6v6\" (UID: \"fadb7ebd-263e-4c1e-8963-a1d7c32faa11\") " pod="kube-system/kube-proxy-ct6v6" Dec 13 01:05:28.357027 kubelet[3489]: I1213 01:05:28.356998 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x6p7\" (UniqueName: \"kubernetes.io/projected/fadb7ebd-263e-4c1e-8963-a1d7c32faa11-kube-api-access-5x6p7\") pod \"kube-proxy-ct6v6\" (UID: \"fadb7ebd-263e-4c1e-8963-a1d7c32faa11\") " pod="kube-system/kube-proxy-ct6v6" Dec 13 01:05:28.357233 containerd[1818]: time="2024-12-13T01:05:28.356807267Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 01:05:28.359589 kubelet[3489]: I1213 01:05:28.357915 3489 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 01:05:28.359589 kubelet[3489]: I1213 01:05:28.358353 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fadb7ebd-263e-4c1e-8963-a1d7c32faa11-xtables-lock\") pod \"kube-proxy-ct6v6\" (UID: \"fadb7ebd-263e-4c1e-8963-a1d7c32faa11\") " pod="kube-system/kube-proxy-ct6v6" Dec 13 01:05:28.359589 kubelet[3489]: I1213 01:05:28.358393 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fadb7ebd-263e-4c1e-8963-a1d7c32faa11-lib-modules\") pod \"kube-proxy-ct6v6\" (UID: \"fadb7ebd-263e-4c1e-8963-a1d7c32faa11\") " pod="kube-system/kube-proxy-ct6v6" Dec 13 01:05:28.467175 kubelet[3489]: I1213 01:05:28.467117 3489 topology_manager.go:215] "Topology Admit Handler" podUID="a5d331fc-ef4a-433d-9c84-b68c60b99908" podNamespace="tigera-operator" podName="tigera-operator-c7ccbd65-2sz6s" Dec 13 01:05:28.560735 kubelet[3489]: I1213 01:05:28.560549 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/a5d331fc-ef4a-433d-9c84-b68c60b99908-var-lib-calico\") pod \"tigera-operator-c7ccbd65-2sz6s\" (UID: \"a5d331fc-ef4a-433d-9c84-b68c60b99908\") " pod="tigera-operator/tigera-operator-c7ccbd65-2sz6s" Dec 13 01:05:28.560735 kubelet[3489]: I1213 01:05:28.560616 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bmss\" (UniqueName: \"kubernetes.io/projected/a5d331fc-ef4a-433d-9c84-b68c60b99908-kube-api-access-7bmss\") pod \"tigera-operator-c7ccbd65-2sz6s\" (UID: \"a5d331fc-ef4a-433d-9c84-b68c60b99908\") " pod="tigera-operator/tigera-operator-c7ccbd65-2sz6s" Dec 13 01:05:28.619171 containerd[1818]: time="2024-12-13T01:05:28.619121678Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-proxy-ct6v6,Uid:fadb7ebd-263e-4c1e-8963-a1d7c32faa11,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:28.776771 containerd[1818]: time="2024-12-13T01:05:28.776707309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-2sz6s,Uid:a5d331fc-ef4a-433d-9c84-b68c60b99908,Namespace:tigera-operator,Attempt:0,}" Dec 13 01:05:30.047145 containerd[1818]: time="2024-12-13T01:05:30.046960465Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:30.047998 containerd[1818]: time="2024-12-13T01:05:30.047584579Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:30.049579 containerd[1818]: time="2024-12-13T01:05:30.048676103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.049579 containerd[1818]: time="2024-12-13T01:05:30.048810705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.064779 containerd[1818]: time="2024-12-13T01:05:30.064583349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:30.065126 containerd[1818]: time="2024-12-13T01:05:30.065072460Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:30.065380 containerd[1818]: time="2024-12-13T01:05:30.065323665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.067597 containerd[1818]: time="2024-12-13T01:05:30.067550213Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:30.148031 containerd[1818]: time="2024-12-13T01:05:30.147978365Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ct6v6,Uid:fadb7ebd-263e-4c1e-8963-a1d7c32faa11,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd30f101710cbbee06d24948a5b5134f9e63ca8d23b6fdc50b5b6ca4e6b5077c\"" Dec 13 01:05:30.156128 containerd[1818]: time="2024-12-13T01:05:30.156046640Z" level=info msg="CreateContainer within sandbox \"bd30f101710cbbee06d24948a5b5134f9e63ca8d23b6fdc50b5b6ca4e6b5077c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 01:05:30.179824 containerd[1818]: time="2024-12-13T01:05:30.179775457Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-c7ccbd65-2sz6s,Uid:a5d331fc-ef4a-433d-9c84-b68c60b99908,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4861882c79097025cf17ffe4642fa8cbc30d1cca5e088dd8cc687e6194e49bc2\"" Dec 13 01:05:30.182328 containerd[1818]: time="2024-12-13T01:05:30.182040706Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Dec 13 01:05:30.199531 containerd[1818]: time="2024-12-13T01:05:30.199438285Z" level=info msg="CreateContainer within sandbox \"bd30f101710cbbee06d24948a5b5134f9e63ca8d23b6fdc50b5b6ca4e6b5077c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4166ca05b429694c6ecd6d4167ae70a1ad62973b08ffa0e0b5d82149138769f0\"" Dec 13 01:05:30.200300 containerd[1818]: time="2024-12-13T01:05:30.200263503Z" level=info msg="StartContainer for \"4166ca05b429694c6ecd6d4167ae70a1ad62973b08ffa0e0b5d82149138769f0\"" Dec 13 01:05:30.279072 containerd[1818]: time="2024-12-13T01:05:30.279006717Z" level=info msg="StartContainer for \"4166ca05b429694c6ecd6d4167ae70a1ad62973b08ffa0e0b5d82149138769f0\" returns successfully" Dec 13 01:05:33.456540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007588526.mount: Deactivated successfully. 
Dec 13 01:05:35.186676 containerd[1818]: time="2024-12-13T01:05:35.186604366Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:35.189713 containerd[1818]: time="2024-12-13T01:05:35.189537230Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21764289" Dec 13 01:05:35.192845 containerd[1818]: time="2024-12-13T01:05:35.192785900Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:35.199140 containerd[1818]: time="2024-12-13T01:05:35.199099238Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:35.200119 containerd[1818]: time="2024-12-13T01:05:35.199930056Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 5.017842549s" Dec 13 01:05:35.200119 containerd[1818]: time="2024-12-13T01:05:35.199975357Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Dec 13 01:05:35.203782 containerd[1818]: time="2024-12-13T01:05:35.203747539Z" level=info msg="CreateContainer within sandbox \"4861882c79097025cf17ffe4642fa8cbc30d1cca5e088dd8cc687e6194e49bc2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Dec 13 01:05:35.245691 containerd[1818]: time="2024-12-13T01:05:35.245638051Z" level=info msg="CreateContainer within sandbox \"4861882c79097025cf17ffe4642fa8cbc30d1cca5e088dd8cc687e6194e49bc2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"ed95adfea0e3b31226a817a06183114139b362a4f4d8fece4b59ef6fe4db6804\"" Dec 13 01:05:35.246928 containerd[1818]: time="2024-12-13T01:05:35.246748875Z" level=info msg="StartContainer for \"ed95adfea0e3b31226a817a06183114139b362a4f4d8fece4b59ef6fe4db6804\"" Dec 13 01:05:35.318543 containerd[1818]: time="2024-12-13T01:05:35.318489337Z" level=info msg="StartContainer for \"ed95adfea0e3b31226a817a06183114139b362a4f4d8fece4b59ef6fe4db6804\" returns successfully" Dec 13 01:05:35.601372 kubelet[3489]: I1213 01:05:35.600990 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-ct6v6" podStartSLOduration=7.600927686 podStartE2EDuration="7.600927686s" podCreationTimestamp="2024-12-13 01:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:05:30.69756073 +0000 UTC m=+15.238545030" watchObservedRunningTime="2024-12-13 01:05:35.600927686 +0000 UTC m=+20.141911986" Dec 13 01:05:35.701416 kubelet[3489]: I1213 01:05:35.700607 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-c7ccbd65-2sz6s" podStartSLOduration=2.681146973 podStartE2EDuration="7.700549155s" podCreationTimestamp="2024-12-13 01:05:28 +0000 UTC" firstStartedPulling="2024-12-13 01:05:30.18127469 +0000 UTC m=+14.722258990" 
lastFinishedPulling="2024-12-13 01:05:35.200676772 +0000 UTC m=+19.741661172" observedRunningTime="2024-12-13 01:05:35.700328751 +0000 UTC m=+20.241313051" watchObservedRunningTime="2024-12-13 01:05:35.700549155 +0000 UTC m=+20.241533455" Dec 13 01:05:38.556377 kubelet[3489]: I1213 01:05:38.556320 3489 topology_manager.go:215] "Topology Admit Handler" podUID="2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f" podNamespace="calico-system" podName="calico-typha-8494cd77d8-5bb52" Dec 13 01:05:38.628952 kubelet[3489]: I1213 01:05:38.628785 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f-tigera-ca-bundle\") pod \"calico-typha-8494cd77d8-5bb52\" (UID: \"2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f\") " pod="calico-system/calico-typha-8494cd77d8-5bb52" Dec 13 01:05:38.628952 kubelet[3489]: I1213 01:05:38.628842 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9d98\" (UniqueName: \"kubernetes.io/projected/2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f-kube-api-access-c9d98\") pod \"calico-typha-8494cd77d8-5bb52\" (UID: \"2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f\") " pod="calico-system/calico-typha-8494cd77d8-5bb52" Dec 13 01:05:38.628952 kubelet[3489]: I1213 01:05:38.628871 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f-typha-certs\") pod \"calico-typha-8494cd77d8-5bb52\" (UID: \"2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f\") " pod="calico-system/calico-typha-8494cd77d8-5bb52" Dec 13 01:05:38.760072 kubelet[3489]: I1213 01:05:38.756496 3489 topology_manager.go:215] "Topology Admit Handler" podUID="040ec1d4-282b-45c0-a72e-2bd97bdc4265" podNamespace="calico-system" podName="calico-node-m2thv" Dec 13 01:05:38.830572 kubelet[3489]: I1213 01:05:38.830166 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-policysync\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831597 kubelet[3489]: I1213 01:05:38.831350 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-var-run-calico\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831597 kubelet[3489]: I1213 01:05:38.831427 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-flexvol-driver-host\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831597 kubelet[3489]: I1213 01:05:38.831462 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfdcv\" (UniqueName: \"kubernetes.io/projected/040ec1d4-282b-45c0-a72e-2bd97bdc4265-kube-api-access-tfdcv\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831597 kubelet[3489]: I1213 01:05:38.831517 
3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-var-lib-calico\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831867 kubelet[3489]: I1213 01:05:38.831659 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/040ec1d4-282b-45c0-a72e-2bd97bdc4265-node-certs\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831867 kubelet[3489]: I1213 01:05:38.831694 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-cni-bin-dir\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.831867 kubelet[3489]: I1213 01:05:38.831742 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-cni-net-dir\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.832802 kubelet[3489]: I1213 01:05:38.832035 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040ec1d4-282b-45c0-a72e-2bd97bdc4265-tigera-ca-bundle\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.832802 kubelet[3489]: I1213 01:05:38.832111 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-lib-modules\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.832802 kubelet[3489]: I1213 01:05:38.832149 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-xtables-lock\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.833576 kubelet[3489]: I1213 01:05:38.832854 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/040ec1d4-282b-45c0-a72e-2bd97bdc4265-cni-log-dir\") pod \"calico-node-m2thv\" (UID: \"040ec1d4-282b-45c0-a72e-2bd97bdc4265\") " pod="calico-system/calico-node-m2thv" Dec 13 01:05:38.870477 containerd[1818]: time="2024-12-13T01:05:38.868465951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8494cd77d8-5bb52,Uid:2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f,Namespace:calico-system,Attempt:0,}" Dec 13 01:05:38.883234 kubelet[3489]: I1213 01:05:38.883167 3489 topology_manager.go:215] "Topology Admit Handler" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" podNamespace="calico-system" podName="csi-node-driver-z6cd5" Dec 13 01:05:38.885333 kubelet[3489]: E1213 01:05:38.884259 3489 pod_workers.go:1298] 
"Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:38.933810 kubelet[3489]: I1213 01:05:38.933635 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/52cded57-51a5-4d1e-9829-02d4ac1d0d2d-socket-dir\") pod \"csi-node-driver-z6cd5\" (UID: \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\") " pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:38.933810 kubelet[3489]: I1213 01:05:38.933714 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5k9cf\" (UniqueName: \"kubernetes.io/projected/52cded57-51a5-4d1e-9829-02d4ac1d0d2d-kube-api-access-5k9cf\") pod \"csi-node-driver-z6cd5\" (UID: \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\") " pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:38.934133 kubelet[3489]: I1213 01:05:38.934041 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/52cded57-51a5-4d1e-9829-02d4ac1d0d2d-registration-dir\") pod \"csi-node-driver-z6cd5\" (UID: \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\") " pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:38.934133 kubelet[3489]: I1213 01:05:38.934108 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/52cded57-51a5-4d1e-9829-02d4ac1d0d2d-varrun\") pod \"csi-node-driver-z6cd5\" (UID: \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\") " pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:38.934239 kubelet[3489]: I1213 01:05:38.934198 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/52cded57-51a5-4d1e-9829-02d4ac1d0d2d-kubelet-dir\") pod \"csi-node-driver-z6cd5\" (UID: \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\") " pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:38.937391 kubelet[3489]: E1213 01:05:38.936793 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.937391 kubelet[3489]: W1213 01:05:38.936841 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.937391 kubelet[3489]: E1213 01:05:38.936878 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:38.937391 kubelet[3489]: E1213 01:05:38.937248 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.937391 kubelet[3489]: W1213 01:05:38.937264 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.937391 kubelet[3489]: E1213 01:05:38.937286 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.938452 kubelet[3489]: E1213 01:05:38.938168 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.938452 kubelet[3489]: W1213 01:05:38.938203 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.938452 kubelet[3489]: E1213 01:05:38.938237 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.940028 kubelet[3489]: E1213 01:05:38.939655 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.940028 kubelet[3489]: W1213 01:05:38.939728 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.940028 kubelet[3489]: E1213 01:05:38.939750 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.952372 kubelet[3489]: E1213 01:05:38.951361 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.952372 kubelet[3489]: W1213 01:05:38.951386 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.952372 kubelet[3489]: E1213 01:05:38.951418 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.954393 kubelet[3489]: E1213 01:05:38.954377 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.954450 kubelet[3489]: W1213 01:05:38.954396 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.956349 kubelet[3489]: E1213 01:05:38.956321 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:38.964018 kubelet[3489]: E1213 01:05:38.963901 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.964018 kubelet[3489]: W1213 01:05:38.963921 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.964018 kubelet[3489]: E1213 01:05:38.963956 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.974606 kubelet[3489]: E1213 01:05:38.973341 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.974606 kubelet[3489]: W1213 01:05:38.974268 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.974606 kubelet[3489]: E1213 01:05:38.974306 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.982722 kubelet[3489]: E1213 01:05:38.982685 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.982722 kubelet[3489]: W1213 01:05:38.982727 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.983098 kubelet[3489]: E1213 01:05:38.982758 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.994691 kubelet[3489]: E1213 01:05:38.993439 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.994691 kubelet[3489]: W1213 01:05:38.993470 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.994691 kubelet[3489]: E1213 01:05:38.993504 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:38.997300 kubelet[3489]: E1213 01:05:38.997237 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:38.997300 kubelet[3489]: W1213 01:05:38.997259 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:38.997564 kubelet[3489]: E1213 01:05:38.997285 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.011991 containerd[1818]: time="2024-12-13T01:05:39.011798156Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:39.016450 containerd[1818]: time="2024-12-13T01:05:39.014168308Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:39.016450 containerd[1818]: time="2024-12-13T01:05:39.014222409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:39.019244 containerd[1818]: time="2024-12-13T01:05:39.018534802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:39.037608 kubelet[3489]: E1213 01:05:39.037565 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.038084 kubelet[3489]: W1213 01:05:39.037945 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.038721 kubelet[3489]: E1213 01:05:39.037989 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.039439 kubelet[3489]: E1213 01:05:39.039403 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.039439 kubelet[3489]: W1213 01:05:39.039418 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.039924 kubelet[3489]: E1213 01:05:39.039688 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.040514 kubelet[3489]: E1213 01:05:39.040493 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.040514 kubelet[3489]: W1213 01:05:39.040513 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.040547 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.040793 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.042720 kubelet[3489]: W1213 01:05:39.040802 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.040830 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.041074 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.042720 kubelet[3489]: W1213 01:05:39.041085 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.041566 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.041707 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.042720 kubelet[3489]: W1213 01:05:39.041722 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.042720 kubelet[3489]: E1213 01:05:39.041744 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.043135 kubelet[3489]: E1213 01:05:39.042515 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.043135 kubelet[3489]: W1213 01:05:39.042556 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.043135 kubelet[3489]: E1213 01:05:39.042576 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.043135 kubelet[3489]: E1213 01:05:39.042935 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.043135 kubelet[3489]: W1213 01:05:39.042947 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.043135 kubelet[3489]: E1213 01:05:39.042972 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.043409 kubelet[3489]: E1213 01:05:39.043384 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.043409 kubelet[3489]: W1213 01:05:39.043395 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.043486 kubelet[3489]: E1213 01:05:39.043434 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.046093 kubelet[3489]: E1213 01:05:39.046074 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.046093 kubelet[3489]: W1213 01:05:39.046093 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.046276 kubelet[3489]: E1213 01:05:39.046116 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.047818 kubelet[3489]: E1213 01:05:39.047706 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.048468 kubelet[3489]: W1213 01:05:39.048331 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.048468 kubelet[3489]: E1213 01:05:39.048438 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.048719 kubelet[3489]: E1213 01:05:39.048645 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.048719 kubelet[3489]: W1213 01:05:39.048672 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.048820 kubelet[3489]: E1213 01:05:39.048751 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.049204 kubelet[3489]: E1213 01:05:39.048938 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.049204 kubelet[3489]: W1213 01:05:39.048951 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.049204 kubelet[3489]: E1213 01:05:39.049049 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.053804 kubelet[3489]: E1213 01:05:39.053389 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.053804 kubelet[3489]: W1213 01:05:39.053412 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.053804 kubelet[3489]: E1213 01:05:39.053593 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.055840 kubelet[3489]: E1213 01:05:39.055817 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.055840 kubelet[3489]: W1213 01:05:39.055839 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.057568 kubelet[3489]: E1213 01:05:39.056067 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.057568 kubelet[3489]: W1213 01:05:39.056079 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.057742 kubelet[3489]: E1213 01:05:39.057729 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.057992 kubelet[3489]: W1213 01:05:39.057805 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.058322 kubelet[3489]: E1213 01:05:39.058101 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.058322 kubelet[3489]: W1213 01:05:39.058114 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.058322 kubelet[3489]: E1213 01:05:39.058135 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.058322 kubelet[3489]: E1213 01:05:39.058139 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.058322 kubelet[3489]: E1213 01:05:39.058182 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.058322 kubelet[3489]: E1213 01:05:39.058200 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.058813 kubelet[3489]: E1213 01:05:39.058623 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.058813 kubelet[3489]: W1213 01:05:39.058646 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.058813 kubelet[3489]: E1213 01:05:39.058667 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.059865 kubelet[3489]: E1213 01:05:39.059830 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.059865 kubelet[3489]: W1213 01:05:39.059843 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.060365 kubelet[3489]: E1213 01:05:39.060349 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.061363 kubelet[3489]: E1213 01:05:39.061263 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.061363 kubelet[3489]: W1213 01:05:39.061277 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.062714 kubelet[3489]: E1213 01:05:39.062613 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.062714 kubelet[3489]: W1213 01:05:39.062627 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.064322 kubelet[3489]: E1213 01:05:39.063785 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.064322 kubelet[3489]: W1213 01:05:39.063800 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.064322 kubelet[3489]: E1213 01:05:39.063817 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.068867 kubelet[3489]: E1213 01:05:39.068282 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.068867 kubelet[3489]: W1213 01:05:39.068304 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.068867 kubelet[3489]: E1213 01:05:39.068327 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.068867 kubelet[3489]: E1213 01:05:39.068375 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.068867 kubelet[3489]: E1213 01:05:39.068687 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:39.070301 kubelet[3489]: E1213 01:05:39.070282 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.070432 kubelet[3489]: W1213 01:05:39.070417 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.070664 kubelet[3489]: E1213 01:05:39.070522 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.080132 kubelet[3489]: E1213 01:05:39.078736 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:39.080132 kubelet[3489]: W1213 01:05:39.078755 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:39.080132 kubelet[3489]: E1213 01:05:39.078778 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:39.080492 containerd[1818]: time="2024-12-13T01:05:39.079148815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2thv,Uid:040ec1d4-282b-45c0-a72e-2bd97bdc4265,Namespace:calico-system,Attempt:0,}" Dec 13 01:05:39.146604 containerd[1818]: time="2024-12-13T01:05:39.146393372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:05:39.147176 containerd[1818]: time="2024-12-13T01:05:39.147061287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:05:39.147830 containerd[1818]: time="2024-12-13T01:05:39.147157989Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:39.148974 containerd[1818]: time="2024-12-13T01:05:39.148200511Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:05:39.202653 containerd[1818]: time="2024-12-13T01:05:39.202514388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-8494cd77d8-5bb52,Uid:2d6aa8e8-a8fe-4ff7-8998-42cf0219ef5f,Namespace:calico-system,Attempt:0,} returns sandbox id \"79dfb940bf014670fd0769b8bf5c8aa2942b696ee80acdbbbe4ca8b34fea8282\"" Dec 13 01:05:39.205399 containerd[1818]: time="2024-12-13T01:05:39.205088644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Dec 13 01:05:39.245616 containerd[1818]: time="2024-12-13T01:05:39.245453918Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-m2thv,Uid:040ec1d4-282b-45c0-a72e-2bd97bdc4265,Namespace:calico-system,Attempt:0,} returns sandbox id \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\"" Dec 13 01:05:40.583710 kubelet[3489]: E1213 01:05:40.583123 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:40.616043 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778230163.mount: Deactivated successfully. Dec 13 01:05:41.694505 containerd[1818]: time="2024-12-13T01:05:41.694441677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:41.697402 containerd[1818]: time="2024-12-13T01:05:41.697091735Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=31343363" Dec 13 01:05:41.699738 containerd[1818]: time="2024-12-13T01:05:41.699639790Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:41.705410 containerd[1818]: time="2024-12-13T01:05:41.704429094Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:41.705410 containerd[1818]: time="2024-12-13T01:05:41.705256912Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.500122066s" Dec 13 01:05:41.705410 containerd[1818]: time="2024-12-13T01:05:41.705296812Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Dec 13 01:05:41.706321 containerd[1818]: time="2024-12-13T01:05:41.706290134Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 01:05:41.718616 containerd[1818]: time="2024-12-13T01:05:41.718380596Z" level=info msg="CreateContainer within sandbox \"79dfb940bf014670fd0769b8bf5c8aa2942b696ee80acdbbbe4ca8b34fea8282\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Dec 13 01:05:41.764532 containerd[1818]: time="2024-12-13T01:05:41.764475795Z" level=info msg="CreateContainer within sandbox 
\"79dfb940bf014670fd0769b8bf5c8aa2942b696ee80acdbbbe4ca8b34fea8282\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a0232728c3cf8e2de916278d0eb9dffe7950f5b0d0d2940817ea387d8fb6595\"" Dec 13 01:05:41.765497 containerd[1818]: time="2024-12-13T01:05:41.765268212Z" level=info msg="StartContainer for \"6a0232728c3cf8e2de916278d0eb9dffe7950f5b0d0d2940817ea387d8fb6595\"" Dec 13 01:05:41.854607 containerd[1818]: time="2024-12-13T01:05:41.854533946Z" level=info msg="StartContainer for \"6a0232728c3cf8e2de916278d0eb9dffe7950f5b0d0d2940817ea387d8fb6595\" returns successfully" Dec 13 01:05:42.586746 kubelet[3489]: E1213 01:05:42.584947 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:42.734356 kubelet[3489]: I1213 01:05:42.734306 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-8494cd77d8-5bb52" podStartSLOduration=2.232716408 podStartE2EDuration="4.734232505s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:05:39.204540332 +0000 UTC m=+23.745524632" lastFinishedPulling="2024-12-13 01:05:41.706056429 +0000 UTC m=+26.247040729" observedRunningTime="2024-12-13 01:05:42.731175439 +0000 UTC m=+27.272159839" watchObservedRunningTime="2024-12-13 01:05:42.734232505 +0000 UTC m=+27.275216805" Dec 13 01:05:42.746764 kubelet[3489]: E1213 01:05:42.746323 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.746764 kubelet[3489]: W1213 01:05:42.746352 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.746764 kubelet[3489]: E1213 01:05:42.746400 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.747711 kubelet[3489]: E1213 01:05:42.747335 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.747711 kubelet[3489]: W1213 01:05:42.747351 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.747711 kubelet[3489]: E1213 01:05:42.747372 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.747711 kubelet[3489]: E1213 01:05:42.747638 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.747711 kubelet[3489]: W1213 01:05:42.747650 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.747711 kubelet[3489]: E1213 01:05:42.747667 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.750261 kubelet[3489]: E1213 01:05:42.749361 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.750261 kubelet[3489]: W1213 01:05:42.749373 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.750261 kubelet[3489]: E1213 01:05:42.749396 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.750711 kubelet[3489]: E1213 01:05:42.750690 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.750800 kubelet[3489]: W1213 01:05:42.750707 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.750800 kubelet[3489]: E1213 01:05:42.750735 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.751090 kubelet[3489]: E1213 01:05:42.751030 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.751227 kubelet[3489]: W1213 01:05:42.751096 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.751227 kubelet[3489]: E1213 01:05:42.751119 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.752532 kubelet[3489]: E1213 01:05:42.751536 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.752532 kubelet[3489]: W1213 01:05:42.751548 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.752532 kubelet[3489]: E1213 01:05:42.751565 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.752532 kubelet[3489]: E1213 01:05:42.752112 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.752532 kubelet[3489]: W1213 01:05:42.752124 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.752532 kubelet[3489]: E1213 01:05:42.752138 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.753258 kubelet[3489]: E1213 01:05:42.752937 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.753258 kubelet[3489]: W1213 01:05:42.752952 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.753258 kubelet[3489]: E1213 01:05:42.752968 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.753626 kubelet[3489]: E1213 01:05:42.753504 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.753626 kubelet[3489]: W1213 01:05:42.753517 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.753626 kubelet[3489]: E1213 01:05:42.753534 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.754010 kubelet[3489]: E1213 01:05:42.753927 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.754010 kubelet[3489]: W1213 01:05:42.753939 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.754010 kubelet[3489]: E1213 01:05:42.753956 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.754467 kubelet[3489]: E1213 01:05:42.754376 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.754467 kubelet[3489]: W1213 01:05:42.754390 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.754467 kubelet[3489]: E1213 01:05:42.754406 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.754830 kubelet[3489]: E1213 01:05:42.754821 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.754945 kubelet[3489]: W1213 01:05:42.754889 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.754945 kubelet[3489]: E1213 01:05:42.754904 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.755363 kubelet[3489]: E1213 01:05:42.755202 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.755363 kubelet[3489]: W1213 01:05:42.755287 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.755363 kubelet[3489]: E1213 01:05:42.755308 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.756181 kubelet[3489]: E1213 01:05:42.756086 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.756181 kubelet[3489]: W1213 01:05:42.756101 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.756181 kubelet[3489]: E1213 01:05:42.756118 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.769386 kubelet[3489]: E1213 01:05:42.769281 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.769386 kubelet[3489]: W1213 01:05:42.769321 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.769386 kubelet[3489]: E1213 01:05:42.769349 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.770096 kubelet[3489]: E1213 01:05:42.770075 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.770303 kubelet[3489]: W1213 01:05:42.770188 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.770303 kubelet[3489]: E1213 01:05:42.770248 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.770688 kubelet[3489]: E1213 01:05:42.770673 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.770855 kubelet[3489]: W1213 01:05:42.770783 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.770855 kubelet[3489]: E1213 01:05:42.770815 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.771244 kubelet[3489]: E1213 01:05:42.771202 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.771350 kubelet[3489]: W1213 01:05:42.771339 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.771554 kubelet[3489]: E1213 01:05:42.771442 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.772419 kubelet[3489]: E1213 01:05:42.772404 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.772566 kubelet[3489]: W1213 01:05:42.772505 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.772940 kubelet[3489]: E1213 01:05:42.772741 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.773219 kubelet[3489]: E1213 01:05:42.773194 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.773430 kubelet[3489]: W1213 01:05:42.773259 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.773430 kubelet[3489]: E1213 01:05:42.773413 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.774119 kubelet[3489]: E1213 01:05:42.773961 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.774119 kubelet[3489]: W1213 01:05:42.773971 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.774119 kubelet[3489]: E1213 01:05:42.774051 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.774485 kubelet[3489]: E1213 01:05:42.774450 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.774485 kubelet[3489]: W1213 01:05:42.774464 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.774875 kubelet[3489]: E1213 01:05:42.774578 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.774875 kubelet[3489]: E1213 01:05:42.774745 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.774875 kubelet[3489]: W1213 01:05:42.774756 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.774875 kubelet[3489]: E1213 01:05:42.774858 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.775304 kubelet[3489]: E1213 01:05:42.775288 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.775304 kubelet[3489]: W1213 01:05:42.775303 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.775608 kubelet[3489]: E1213 01:05:42.775328 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.775608 kubelet[3489]: E1213 01:05:42.775572 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.775608 kubelet[3489]: W1213 01:05:42.775584 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.775608 kubelet[3489]: E1213 01:05:42.775605 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.776087 kubelet[3489]: E1213 01:05:42.776060 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.776087 kubelet[3489]: W1213 01:05:42.776078 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.776420 kubelet[3489]: E1213 01:05:42.776167 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.776980 kubelet[3489]: E1213 01:05:42.776491 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.776980 kubelet[3489]: W1213 01:05:42.776505 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.776980 kubelet[3489]: E1213 01:05:42.776592 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.776980 kubelet[3489]: E1213 01:05:42.776757 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.776980 kubelet[3489]: W1213 01:05:42.776769 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.776980 kubelet[3489]: E1213 01:05:42.776798 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.777305 kubelet[3489]: E1213 01:05:42.777173 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.777305 kubelet[3489]: W1213 01:05:42.777185 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.777305 kubelet[3489]: E1213 01:05:42.777220 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.777459 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.779230 kubelet[3489]: W1213 01:05:42.777471 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.777498 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.778781 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.779230 kubelet[3489]: W1213 01:05:42.778792 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.778816 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.779015 3489 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 01:05:42.779230 kubelet[3489]: W1213 01:05:42.779025 3489 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 01:05:42.779230 kubelet[3489]: E1213 01:05:42.779039 3489 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 01:05:43.194237 containerd[1818]: time="2024-12-13T01:05:43.194151069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:43.196304 containerd[1818]: time="2024-12-13T01:05:43.196228314Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=5362121" Dec 13 01:05:43.201931 containerd[1818]: time="2024-12-13T01:05:43.201850736Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:43.206984 containerd[1818]: time="2024-12-13T01:05:43.206913846Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:43.208364 containerd[1818]: time="2024-12-13T01:05:43.207696063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.501363228s" Dec 13 01:05:43.208364 containerd[1818]: time="2024-12-13T01:05:43.207749464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 01:05:43.210769 containerd[1818]: time="2024-12-13T01:05:43.210722228Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 01:05:43.252045 containerd[1818]: time="2024-12-13T01:05:43.251983222Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e\"" Dec 13 01:05:43.253685 containerd[1818]: time="2024-12-13T01:05:43.252787040Z" level=info msg="StartContainer for \"b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e\"" Dec 13 01:05:43.345102 containerd[1818]: time="2024-12-13T01:05:43.345048139Z" level=info msg="StartContainer for \"b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e\" returns successfully" Dec 13 01:05:43.388965 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e-rootfs.mount: Deactivated successfully. Dec 13 01:05:43.718501 kubelet[3489]: I1213 01:05:43.717101 3489 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:05:44.716398 kubelet[3489]: E1213 01:05:44.583178 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:44.786300 containerd[1818]: time="2024-12-13T01:05:44.786170945Z" level=info msg="shim disconnected" id=b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e namespace=k8s.io Dec 13 01:05:44.786300 containerd[1818]: time="2024-12-13T01:05:44.786298248Z" level=warning msg="cleaning up after shim disconnected" id=b084a4b03bbebbb80cfc79690fc1b3482c4a0a54fe02a493b5001e1be1f2900e namespace=k8s.io Dec 13 01:05:44.786300 containerd[1818]: time="2024-12-13T01:05:44.786312148Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:05:45.727735 containerd[1818]: time="2024-12-13T01:05:45.726980697Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 01:05:46.583427 kubelet[3489]: E1213 01:05:46.583366 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:47.094573 kubelet[3489]: I1213 01:05:47.094182 3489 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 01:05:48.583600 kubelet[3489]: E1213 01:05:48.583237 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:50.583460 kubelet[3489]: E1213 01:05:50.583405 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:51.051361 containerd[1818]: time="2024-12-13T01:05:51.051298276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:51.053703 containerd[1818]: time="2024-12-13T01:05:51.053628426Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 01:05:51.061955 containerd[1818]: time="2024-12-13T01:05:51.061878705Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:51.067468 containerd[1818]: time="2024-12-13T01:05:51.067392924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:05:51.068895 containerd[1818]: time="2024-12-13T01:05:51.068177141Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.341127543s" Dec 13 01:05:51.068895 containerd[1818]: time="2024-12-13T01:05:51.068246742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 01:05:51.071317 containerd[1818]: time="2024-12-13T01:05:51.071281008Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 01:05:51.115172 containerd[1818]: time="2024-12-13T01:05:51.115118556Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac\"" Dec 13 01:05:51.116065 containerd[1818]: time="2024-12-13T01:05:51.115961375Z" level=info msg="StartContainer for \"018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac\"" Dec 13 01:05:51.193407 containerd[1818]: time="2024-12-13T01:05:51.193344349Z" level=info msg="StartContainer for \"018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac\" returns successfully" Dec 13 01:05:52.583611 kubelet[3489]: E1213 01:05:52.583233 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:52.738992 containerd[1818]: time="2024-12-13T01:05:52.738898892Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 01:05:52.765478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac-rootfs.mount: Deactivated successfully. 
Dec 13 01:05:52.806394 kubelet[3489]: I1213 01:05:52.806348 3489 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 01:05:52.844390 kubelet[3489]: I1213 01:05:52.844182 3489 topology_manager.go:215] "Topology Admit Handler" podUID="67aac1e3-277c-4ca2-9d09-e223acfdf7de" podNamespace="kube-system" podName="coredns-76f75df574-4p62t" Dec 13 01:05:52.852122 kubelet[3489]: I1213 01:05:52.852083 3489 topology_manager.go:215] "Topology Admit Handler" podUID="650bc620-b65e-43c0-b8d1-820767c4d25d" podNamespace="calico-system" podName="calico-kube-controllers-57f84b8987-99kng" Dec 13 01:05:52.856329 kubelet[3489]: I1213 01:05:52.855149 3489 topology_manager.go:215] "Topology Admit Handler" podUID="b105b0cb-657e-4fdc-9682-5e2264dac1c4" podNamespace="kube-system" podName="coredns-76f75df574-j27h4" Dec 13 01:05:52.858912 kubelet[3489]: I1213 01:05:52.858624 3489 topology_manager.go:215] "Topology Admit Handler" podUID="9c1afa73-315a-455e-b964-9dea2e760170" podNamespace="calico-apiserver" podName="calico-apiserver-85d899fc85-5ghj6" Dec 13 01:05:52.858912 kubelet[3489]: I1213 01:05:52.858803 3489 topology_manager.go:215] "Topology Admit Handler" podUID="356134e0-757a-4c9e-82e3-a756fd989077" podNamespace="calico-apiserver" podName="calico-apiserver-85d899fc85-hbn4w" Dec 13 01:05:52.953002 kubelet[3489]: I1213 01:05:52.952944 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4zm7\" (UniqueName: \"kubernetes.io/projected/356134e0-757a-4c9e-82e3-a756fd989077-kube-api-access-t4zm7\") pod \"calico-apiserver-85d899fc85-hbn4w\" (UID: \"356134e0-757a-4c9e-82e3-a756fd989077\") " pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" Dec 13 01:05:52.953002 kubelet[3489]: I1213 01:05:52.953015 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/67aac1e3-277c-4ca2-9d09-e223acfdf7de-config-volume\") pod \"coredns-76f75df574-4p62t\" (UID: \"67aac1e3-277c-4ca2-9d09-e223acfdf7de\") " pod="kube-system/coredns-76f75df574-4p62t" Dec 13 01:05:52.953002 kubelet[3489]: I1213 01:05:52.953047 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/650bc620-b65e-43c0-b8d1-820767c4d25d-tigera-ca-bundle\") pod \"calico-kube-controllers-57f84b8987-99kng\" (UID: \"650bc620-b65e-43c0-b8d1-820767c4d25d\") " pod="calico-system/calico-kube-controllers-57f84b8987-99kng" Dec 13 01:05:52.953366 kubelet[3489]: I1213 01:05:52.953075 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bj5zr\" (UniqueName: \"kubernetes.io/projected/b105b0cb-657e-4fdc-9682-5e2264dac1c4-kube-api-access-bj5zr\") pod \"coredns-76f75df574-j27h4\" (UID: \"b105b0cb-657e-4fdc-9682-5e2264dac1c4\") " pod="kube-system/coredns-76f75df574-j27h4" Dec 13 01:05:52.953366 kubelet[3489]: I1213 01:05:52.953105 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9c1afa73-315a-455e-b964-9dea2e760170-calico-apiserver-certs\") pod \"calico-apiserver-85d899fc85-5ghj6\" (UID: \"9c1afa73-315a-455e-b964-9dea2e760170\") " pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" Dec 13 01:05:52.953366 kubelet[3489]: I1213 01:05:52.953132 3489 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/356134e0-757a-4c9e-82e3-a756fd989077-calico-apiserver-certs\") pod \"calico-apiserver-85d899fc85-hbn4w\" (UID: \"356134e0-757a-4c9e-82e3-a756fd989077\") " pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" Dec 13 01:05:52.953366 kubelet[3489]: I1213 01:05:52.953165 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f29fr\" (UniqueName: \"kubernetes.io/projected/9c1afa73-315a-455e-b964-9dea2e760170-kube-api-access-f29fr\") pod \"calico-apiserver-85d899fc85-5ghj6\" (UID: \"9c1afa73-315a-455e-b964-9dea2e760170\") " pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" Dec 13 01:05:52.953366 kubelet[3489]: I1213 01:05:52.953196 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2n8l8\" (UniqueName: \"kubernetes.io/projected/67aac1e3-277c-4ca2-9d09-e223acfdf7de-kube-api-access-2n8l8\") pod \"coredns-76f75df574-4p62t\" (UID: \"67aac1e3-277c-4ca2-9d09-e223acfdf7de\") " pod="kube-system/coredns-76f75df574-4p62t" Dec 13 01:05:52.953526 kubelet[3489]: I1213 01:05:52.953235 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b105b0cb-657e-4fdc-9682-5e2264dac1c4-config-volume\") pod \"coredns-76f75df574-j27h4\" (UID: \"b105b0cb-657e-4fdc-9682-5e2264dac1c4\") " pod="kube-system/coredns-76f75df574-j27h4" Dec 13 01:05:52.953526 kubelet[3489]: I1213 01:05:52.953266 3489 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnbh2\" (UniqueName: \"kubernetes.io/projected/650bc620-b65e-43c0-b8d1-820767c4d25d-kube-api-access-hnbh2\") pod \"calico-kube-controllers-57f84b8987-99kng\" (UID: \"650bc620-b65e-43c0-b8d1-820767c4d25d\") " pod="calico-system/calico-kube-controllers-57f84b8987-99kng" Dec 13 01:05:53.170424 containerd[1818]: time="2024-12-13T01:05:53.170023227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4p62t,Uid:67aac1e3-277c-4ca2-9d09-e223acfdf7de,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:53.170424 containerd[1818]: time="2024-12-13T01:05:53.170023427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f84b8987-99kng,Uid:650bc620-b65e-43c0-b8d1-820767c4d25d,Namespace:calico-system,Attempt:0,}" Dec 13 01:05:53.180091 containerd[1818]: time="2024-12-13T01:05:53.180044744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j27h4,Uid:b105b0cb-657e-4fdc-9682-5e2264dac1c4,Namespace:kube-system,Attempt:0,}" Dec 13 01:05:53.188900 containerd[1818]: time="2024-12-13T01:05:53.188853135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-hbn4w,Uid:356134e0-757a-4c9e-82e3-a756fd989077,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:05:53.197599 containerd[1818]: time="2024-12-13T01:05:53.197561023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-5ghj6,Uid:9c1afa73-315a-455e-b964-9dea2e760170,Namespace:calico-apiserver,Attempt:0,}" Dec 13 01:05:54.484716 containerd[1818]: time="2024-12-13T01:05:54.484637392Z" level=info msg="shim disconnected" id=018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac namespace=k8s.io Dec 13 01:05:54.484716 containerd[1818]: 
time="2024-12-13T01:05:54.484705894Z" level=warning msg="cleaning up after shim disconnected" id=018eb15e85ab719b96661dd32d513aeae855024aaa3ac82033f4f0608a5ce9ac namespace=k8s.io Dec 13 01:05:54.484716 containerd[1818]: time="2024-12-13T01:05:54.484717894Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 01:05:54.609237 containerd[1818]: time="2024-12-13T01:05:54.609134488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6cd5,Uid:52cded57-51a5-4d1e-9829-02d4ac1d0d2d,Namespace:calico-system,Attempt:0,}" Dec 13 01:05:54.782013 containerd[1818]: time="2024-12-13T01:05:54.781701325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 01:05:54.855045 containerd[1818]: time="2024-12-13T01:05:54.854968011Z" level=error msg="Failed to destroy network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.855828 containerd[1818]: time="2024-12-13T01:05:54.855782329Z" level=error msg="encountered an error cleaning up failed sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.856691 containerd[1818]: time="2024-12-13T01:05:54.856006734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j27h4,Uid:b105b0cb-657e-4fdc-9682-5e2264dac1c4,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.856929 kubelet[3489]: E1213 01:05:54.856367 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.856929 kubelet[3489]: E1213 01:05:54.856489 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j27h4" Dec 13 01:05:54.856929 kubelet[3489]: E1213 01:05:54.856529 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-j27h4" Dec 13 01:05:54.858067 kubelet[3489]: 
E1213 01:05:54.856653 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-j27h4_kube-system(b105b0cb-657e-4fdc-9682-5e2264dac1c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-j27h4_kube-system(b105b0cb-657e-4fdc-9682-5e2264dac1c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j27h4" podUID="b105b0cb-657e-4fdc-9682-5e2264dac1c4" Dec 13 01:05:54.897472 containerd[1818]: time="2024-12-13T01:05:54.897399930Z" level=error msg="Failed to destroy network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.898056 containerd[1818]: time="2024-12-13T01:05:54.897894441Z" level=error msg="encountered an error cleaning up failed sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.898056 containerd[1818]: time="2024-12-13T01:05:54.897980743Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4p62t,Uid:67aac1e3-277c-4ca2-9d09-e223acfdf7de,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.898624 kubelet[3489]: E1213 01:05:54.898451 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.898624 kubelet[3489]: E1213 01:05:54.898538 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-4p62t" Dec 13 01:05:54.898624 kubelet[3489]: E1213 01:05:54.898570 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-76f75df574-4p62t" Dec 13 01:05:54.899495 kubelet[3489]: E1213 01:05:54.899429 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-4p62t_kube-system(67aac1e3-277c-4ca2-9d09-e223acfdf7de)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-4p62t_kube-system(67aac1e3-277c-4ca2-9d09-e223acfdf7de)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4p62t" podUID="67aac1e3-277c-4ca2-9d09-e223acfdf7de" Dec 13 01:05:54.913684 containerd[1818]: time="2024-12-13T01:05:54.913437277Z" level=error msg="Failed to destroy network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.915760 containerd[1818]: time="2024-12-13T01:05:54.915544923Z" level=error msg="encountered an error cleaning up failed sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.916156 containerd[1818]: time="2024-12-13T01:05:54.915870230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f84b8987-99kng,Uid:650bc620-b65e-43c0-b8d1-820767c4d25d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.916827 kubelet[3489]: E1213 01:05:54.916722 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.916949 kubelet[3489]: E1213 01:05:54.916867 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f84b8987-99kng" Dec 13 01:05:54.916949 kubelet[3489]: E1213 01:05:54.916903 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-57f84b8987-99kng" Dec 13 01:05:54.917500 kubelet[3489]: E1213 01:05:54.916993 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-57f84b8987-99kng_calico-system(650bc620-b65e-43c0-b8d1-820767c4d25d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-57f84b8987-99kng_calico-system(650bc620-b65e-43c0-b8d1-820767c4d25d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f84b8987-99kng" podUID="650bc620-b65e-43c0-b8d1-820767c4d25d" Dec 13 01:05:54.922807 containerd[1818]: time="2024-12-13T01:05:54.922610576Z" level=error msg="Failed to destroy network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.923690 containerd[1818]: time="2024-12-13T01:05:54.923638298Z" level=error msg="encountered an error cleaning up failed sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.924047 containerd[1818]: time="2024-12-13T01:05:54.923926004Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-hbn4w,Uid:356134e0-757a-4c9e-82e3-a756fd989077,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.925470 kubelet[3489]: E1213 01:05:54.924764 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.925470 kubelet[3489]: E1213 01:05:54.924933 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" Dec 13 01:05:54.925470 kubelet[3489]: E1213 01:05:54.925057 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" Dec 13 01:05:54.925752 kubelet[3489]: E1213 01:05:54.925378 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d899fc85-hbn4w_calico-apiserver(356134e0-757a-4c9e-82e3-a756fd989077)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d899fc85-hbn4w_calico-apiserver(356134e0-757a-4c9e-82e3-a756fd989077)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" podUID="356134e0-757a-4c9e-82e3-a756fd989077" Dec 13 01:05:54.929346 containerd[1818]: time="2024-12-13T01:05:54.929306721Z" level=error msg="Failed to destroy network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.930032 containerd[1818]: time="2024-12-13T01:05:54.929899334Z" level=error msg="encountered an error cleaning up failed sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.930193 containerd[1818]: time="2024-12-13T01:05:54.930133639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-5ghj6,Uid:9c1afa73-315a-455e-b964-9dea2e760170,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.930814 kubelet[3489]: E1213 01:05:54.930589 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.930814 kubelet[3489]: E1213 01:05:54.930667 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" Dec 13 01:05:54.930814 kubelet[3489]: E1213 
01:05:54.930700 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" Dec 13 01:05:54.930984 kubelet[3489]: E1213 01:05:54.930775 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-85d899fc85-5ghj6_calico-apiserver(9c1afa73-315a-455e-b964-9dea2e760170)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-85d899fc85-5ghj6_calico-apiserver(9c1afa73-315a-455e-b964-9dea2e760170)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" podUID="9c1afa73-315a-455e-b964-9dea2e760170" Dec 13 01:05:54.941492 containerd[1818]: time="2024-12-13T01:05:54.941438484Z" level=error msg="Failed to destroy network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.941903 containerd[1818]: time="2024-12-13T01:05:54.941870393Z" level=error msg="encountered an error cleaning up failed sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.942003 containerd[1818]: time="2024-12-13T01:05:54.941971595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6cd5,Uid:52cded57-51a5-4d1e-9829-02d4ac1d0d2d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.942295 kubelet[3489]: E1213 01:05:54.942269 3489 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:54.942398 kubelet[3489]: E1213 01:05:54.942334 3489 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:54.942398 kubelet[3489]: E1213 01:05:54.942371 3489 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-z6cd5" Dec 13 01:05:54.942484 kubelet[3489]: E1213 01:05:54.942466 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-z6cd5_calico-system(52cded57-51a5-4d1e-9829-02d4ac1d0d2d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-z6cd5_calico-system(52cded57-51a5-4d1e-9829-02d4ac1d0d2d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:55.573067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561-shm.mount: Deactivated successfully. Dec 13 01:05:55.573987 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc-shm.mount: Deactivated successfully. Dec 13 01:05:55.574201 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d-shm.mount: Deactivated successfully. Dec 13 01:05:55.574397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d-shm.mount: Deactivated successfully. 
Dec 13 01:05:55.772165 kubelet[3489]: I1213 01:05:55.772048 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:05:55.774361 containerd[1818]: time="2024-12-13T01:05:55.774146514Z" level=info msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" Dec 13 01:05:55.775962 containerd[1818]: time="2024-12-13T01:05:55.775174736Z" level=info msg="Ensure that sandbox 6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9 in task-service has been cleanup successfully" Dec 13 01:05:55.776887 kubelet[3489]: I1213 01:05:55.776839 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:05:55.781771 containerd[1818]: time="2024-12-13T01:05:55.780286747Z" level=info msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" Dec 13 01:05:55.781771 containerd[1818]: time="2024-12-13T01:05:55.780517352Z" level=info msg="Ensure that sandbox 5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561 in task-service has been cleanup successfully" Dec 13 01:05:55.797961 kubelet[3489]: I1213 01:05:55.797821 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:05:55.800734 containerd[1818]: time="2024-12-13T01:05:55.799057954Z" level=info msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" Dec 13 01:05:55.800734 containerd[1818]: time="2024-12-13T01:05:55.799343660Z" level=info msg="Ensure that sandbox 31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43 in task-service has been cleanup successfully" Dec 13 01:05:55.803657 kubelet[3489]: I1213 01:05:55.803258 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:05:55.807223 containerd[1818]: time="2024-12-13T01:05:55.804238566Z" level=info msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" Dec 13 01:05:55.808611 containerd[1818]: time="2024-12-13T01:05:55.808363455Z" level=info msg="Ensure that sandbox d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d in task-service has been cleanup successfully" Dec 13 01:05:55.808712 kubelet[3489]: I1213 01:05:55.808481 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:05:55.810043 containerd[1818]: time="2024-12-13T01:05:55.809962490Z" level=info msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" Dec 13 01:05:55.811466 kubelet[3489]: I1213 01:05:55.811387 3489 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:05:55.814227 containerd[1818]: time="2024-12-13T01:05:55.814097879Z" level=info msg="Ensure that sandbox dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc in task-service has been cleanup successfully" Dec 13 01:05:55.814823 containerd[1818]: time="2024-12-13T01:05:55.814624891Z" level=info msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" Dec 13 01:05:55.814901 
containerd[1818]: time="2024-12-13T01:05:55.814813095Z" level=info msg="Ensure that sandbox 7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d in task-service has been cleanup successfully" Dec 13 01:05:55.952613 containerd[1818]: time="2024-12-13T01:05:55.951537355Z" level=error msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" failed" error="failed to destroy network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.952793 kubelet[3489]: E1213 01:05:55.951975 3489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:05:55.952793 kubelet[3489]: E1213 01:05:55.952096 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9"} Dec 13 01:05:55.952793 kubelet[3489]: E1213 01:05:55.952152 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.952793 kubelet[3489]: E1213 01:05:55.952199 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"52cded57-51a5-4d1e-9829-02d4ac1d0d2d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-z6cd5" podUID="52cded57-51a5-4d1e-9829-02d4ac1d0d2d" Dec 13 01:05:55.962170 containerd[1818]: time="2024-12-13T01:05:55.961505971Z" level=error msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" failed" error="failed to destroy network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.962381 kubelet[3489]: E1213 01:05:55.961881 3489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" podSandboxID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:05:55.962381 kubelet[3489]: E1213 01:05:55.961946 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561"} Dec 13 01:05:55.962381 kubelet[3489]: E1213 01:05:55.961997 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"356134e0-757a-4c9e-82e3-a756fd989077\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.962381 kubelet[3489]: E1213 01:05:55.962047 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"356134e0-757a-4c9e-82e3-a756fd989077\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" podUID="356134e0-757a-4c9e-82e3-a756fd989077" Dec 13 01:05:55.966307 containerd[1818]: time="2024-12-13T01:05:55.965920267Z" level=error msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" failed" error="failed to destroy network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.966440 kubelet[3489]: E1213 01:05:55.966284 3489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:05:55.966440 kubelet[3489]: E1213 01:05:55.966346 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d"} Dec 13 01:05:55.966440 kubelet[3489]: E1213 01:05:55.966417 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"67aac1e3-277c-4ca2-9d09-e223acfdf7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.966605 kubelet[3489]: E1213 01:05:55.966470 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for 
\"67aac1e3-277c-4ca2-9d09-e223acfdf7de\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-4p62t" podUID="67aac1e3-277c-4ca2-9d09-e223acfdf7de" Dec 13 01:05:55.978228 containerd[1818]: time="2024-12-13T01:05:55.977188911Z" level=error msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" failed" error="failed to destroy network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.978397 kubelet[3489]: E1213 01:05:55.977605 3489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:05:55.978397 kubelet[3489]: E1213 01:05:55.977674 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d"} Dec 13 01:05:55.978397 kubelet[3489]: E1213 01:05:55.977747 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b105b0cb-657e-4fdc-9682-5e2264dac1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.978397 kubelet[3489]: E1213 01:05:55.977802 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b105b0cb-657e-4fdc-9682-5e2264dac1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-j27h4" podUID="b105b0cb-657e-4fdc-9682-5e2264dac1c4" Dec 13 01:05:55.981078 containerd[1818]: time="2024-12-13T01:05:55.980975293Z" level=error msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" failed" error="failed to destroy network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.981509 kubelet[3489]: E1213 01:05:55.981486 3489 remote_runtime.go:222] "StopPodSandbox from runtime 
service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:05:55.981666 kubelet[3489]: E1213 01:05:55.981655 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc"} Dec 13 01:05:55.981776 kubelet[3489]: E1213 01:05:55.981767 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"650bc620-b65e-43c0-b8d1-820767c4d25d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.981928 kubelet[3489]: E1213 01:05:55.981916 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"650bc620-b65e-43c0-b8d1-820767c4d25d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-57f84b8987-99kng" podUID="650bc620-b65e-43c0-b8d1-820767c4d25d" Dec 13 01:05:55.982329 containerd[1818]: time="2024-12-13T01:05:55.982294221Z" level=error msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" failed" error="failed to destroy network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 01:05:55.982538 kubelet[3489]: E1213 01:05:55.982516 3489 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:05:55.982622 kubelet[3489]: E1213 01:05:55.982550 3489 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43"} Dec 13 01:05:55.982622 kubelet[3489]: E1213 01:05:55.982592 3489 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c1afa73-315a-455e-b964-9dea2e760170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Dec 13 01:05:55.982722 kubelet[3489]: E1213 01:05:55.982627 3489 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c1afa73-315a-455e-b964-9dea2e760170\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" podUID="9c1afa73-315a-455e-b964-9dea2e760170" Dec 13 01:06:03.179218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1993077182.mount: Deactivated successfully. Dec 13 01:06:03.233967 containerd[1818]: time="2024-12-13T01:06:03.233882432Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:03.236189 containerd[1818]: time="2024-12-13T01:06:03.236107779Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 01:06:03.239821 containerd[1818]: time="2024-12-13T01:06:03.239748956Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:03.244938 containerd[1818]: time="2024-12-13T01:06:03.244864565Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:03.245749 containerd[1818]: time="2024-12-13T01:06:03.245524479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.463762053s" Dec 13 01:06:03.245749 containerd[1818]: time="2024-12-13T01:06:03.245576880Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 01:06:03.259333 containerd[1818]: time="2024-12-13T01:06:03.258027445Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 01:06:03.310822 containerd[1818]: time="2024-12-13T01:06:03.310757965Z" level=info msg="CreateContainer within sandbox \"7ed5a9a5ffa8557e8cb1509ac53877a0b9efdd03036efe01f5d9d5de551d6dca\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2026d73656ebab7ad3f0509a3bb114de11789b9105904b322e0cd2a8ddfcdfb2\"" Dec 13 01:06:03.311718 containerd[1818]: time="2024-12-13T01:06:03.311681584Z" level=info msg="StartContainer for \"2026d73656ebab7ad3f0509a3bb114de11789b9105904b322e0cd2a8ddfcdfb2\"" Dec 13 01:06:03.383459 containerd[1818]: time="2024-12-13T01:06:03.383271105Z" level=info msg="StartContainer for \"2026d73656ebab7ad3f0509a3bb114de11789b9105904b322e0cd2a8ddfcdfb2\" returns successfully" Dec 13 
01:06:03.674572 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 01:06:03.674820 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Dec 13 01:06:03.868754 kubelet[3489]: I1213 01:06:03.867719 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-m2thv" podStartSLOduration=1.869354371 podStartE2EDuration="25.867643693s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:05:39.247877671 +0000 UTC m=+23.788861971" lastFinishedPulling="2024-12-13 01:06:03.246166893 +0000 UTC m=+47.787151293" observedRunningTime="2024-12-13 01:06:03.864113318 +0000 UTC m=+48.405097618" watchObservedRunningTime="2024-12-13 01:06:03.867643693 +0000 UTC m=+48.408627993" Dec 13 01:06:05.410251 kernel: bpftool[4684]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 01:06:05.716372 systemd-networkd[1391]: vxlan.calico: Link UP Dec 13 01:06:05.716384 systemd-networkd[1391]: vxlan.calico: Gained carrier Dec 13 01:06:06.584256 containerd[1818]: time="2024-12-13T01:06:06.583864285Z" level=info msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" Dec 13 01:06:06.585591 containerd[1818]: time="2024-12-13T01:06:06.583864385Z" level=info msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.667 [INFO][4791] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.668 [INFO][4791] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" iface="eth0" netns="/var/run/netns/cni-66edb6da-4488-c48a-f5e5-39f83d616b45" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.668 [INFO][4791] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" iface="eth0" netns="/var/run/netns/cni-66edb6da-4488-c48a-f5e5-39f83d616b45" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.668 [INFO][4791] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" iface="eth0" netns="/var/run/netns/cni-66edb6da-4488-c48a-f5e5-39f83d616b45" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.668 [INFO][4791] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.671 [INFO][4791] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.707 [INFO][4803] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.707 [INFO][4803] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.707 [INFO][4803] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.717 [WARNING][4803] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.717 [INFO][4803] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.720 [INFO][4803] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:06.724751 containerd[1818]: 2024-12-13 01:06:06.722 [INFO][4791] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:06.728136 containerd[1818]: time="2024-12-13T01:06:06.725383091Z" level=info msg="TearDown network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" successfully" Dec 13 01:06:06.728136 containerd[1818]: time="2024-12-13T01:06:06.725426792Z" level=info msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" returns successfully" Dec 13 01:06:06.732230 containerd[1818]: time="2024-12-13T01:06:06.731089412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j27h4,Uid:b105b0cb-657e-4fdc-9682-5e2264dac1c4,Namespace:kube-system,Attempt:1,}" Dec 13 01:06:06.733030 systemd[1]: run-netns-cni\x2d66edb6da\x2d4488\x2dc48a\x2df5e5\x2d39f83d616b45.mount: Deactivated successfully. Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.672 [INFO][4792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.672 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" iface="eth0" netns="/var/run/netns/cni-250b3295-71a7-f860-aa97-4360ee7e35c3" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.673 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" iface="eth0" netns="/var/run/netns/cni-250b3295-71a7-f860-aa97-4360ee7e35c3" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.673 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" iface="eth0" netns="/var/run/netns/cni-250b3295-71a7-f860-aa97-4360ee7e35c3" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.673 [INFO][4792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.673 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.716 [INFO][4804] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.716 [INFO][4804] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.720 [INFO][4804] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.728 [WARNING][4804] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.728 [INFO][4804] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.732 [INFO][4804] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:06.737853 containerd[1818]: 2024-12-13 01:06:06.736 [INFO][4792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:06.742022 containerd[1818]: time="2024-12-13T01:06:06.738284965Z" level=info msg="TearDown network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" successfully" Dec 13 01:06:06.742022 containerd[1818]: time="2024-12-13T01:06:06.738321165Z" level=info msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" returns successfully" Dec 13 01:06:06.743662 containerd[1818]: time="2024-12-13T01:06:06.743329272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f84b8987-99kng,Uid:650bc620-b65e-43c0-b8d1-820767c4d25d,Namespace:calico-system,Attempt:1,}" Dec 13 01:06:06.744277 systemd[1]: run-netns-cni\x2d250b3295\x2d71a7\x2df860\x2daa97\x2d4360ee7e35c3.mount: Deactivated successfully. 
Dec 13 01:06:06.973383 systemd-networkd[1391]: cali1c62241164c: Link UP Dec 13 01:06:06.977332 systemd-networkd[1391]: cali1c62241164c: Gained carrier Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.859 [INFO][4817] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0 coredns-76f75df574- kube-system b105b0cb-657e-4fdc-9682-5e2264dac1c4 756 0 2024-12-13 01:05:28 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da coredns-76f75df574-j27h4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali1c62241164c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.859 [INFO][4817] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.909 [INFO][4840] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" HandleID="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.921 [INFO][4840] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" HandleID="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00042b9e0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-672c6884da", "pod":"coredns-76f75df574-j27h4", "timestamp":"2024-12-13 01:06:06.909479401 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.921 [INFO][4840] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.921 [INFO][4840] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.921 [INFO][4840] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.923 [INFO][4840] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.928 [INFO][4840] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.939 [INFO][4840] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.941 [INFO][4840] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.943 [INFO][4840] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.943 [INFO][4840] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.945 [INFO][4840] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.951 [INFO][4840] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.959 [INFO][4840] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.193/26] block=192.168.43.192/26 handle="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.959 [INFO][4840] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.193/26] handle="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.959 [INFO][4840] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:06:07.005760 containerd[1818]: 2024-12-13 01:06:06.960 [INFO][4840] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.193/26] IPv6=[] ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" HandleID="k8s-pod-network.abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:06.966 [INFO][4817] cni-plugin/k8s.go 386: Populated endpoint ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b105b0cb-657e-4fdc-9682-5e2264dac1c4", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"coredns-76f75df574-j27h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62241164c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:06.966 [INFO][4817] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.193/32] ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:06.966 [INFO][4817] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1c62241164c ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:06.978 [INFO][4817] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" 
WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:06.980 [INFO][4817] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b105b0cb-657e-4fdc-9682-5e2264dac1c4", ResourceVersion:"756", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a", Pod:"coredns-76f75df574-j27h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62241164c", MAC:"32:08:0a:f5:c6:30", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.007506 containerd[1818]: 2024-12-13 01:06:07.003 [INFO][4817] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a" Namespace="kube-system" Pod="coredns-76f75df574-j27h4" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:07.020430 systemd-networkd[1391]: cali3fa5ae36bcc: Link UP Dec 13 01:06:07.020647 systemd-networkd[1391]: cali3fa5ae36bcc: Gained carrier Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.858 [INFO][4827] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0 calico-kube-controllers-57f84b8987- calico-system 650bc620-b65e-43c0-b8d1-820767c4d25d 757 0 2024-12-13 01:05:38 +0000 UTC <nil> <nil> map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:57f84b8987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da calico-kube-controllers-57f84b8987-99kng eth0 
calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali3fa5ae36bcc [] []}} ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.858 [INFO][4827] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.919 [INFO][4844] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" HandleID="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.936 [INFO][4844] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" HandleID="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292d60), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-672c6884da", "pod":"calico-kube-controllers-57f84b8987-99kng", "timestamp":"2024-12-13 01:06:06.919275009 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.937 [INFO][4844] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.959 [INFO][4844] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.959 [INFO][4844] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.962 [INFO][4844] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.966 [INFO][4844] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.972 [INFO][4844] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.974 [INFO][4844] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.977 [INFO][4844] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.977 [INFO][4844] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.979 [INFO][4844] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124 Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:06.991 [INFO][4844] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:07.013 [INFO][4844] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.194/26] block=192.168.43.192/26 handle="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:07.013 [INFO][4844] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.194/26] handle="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:07.014 [INFO][4844] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:06:07.057248 containerd[1818]: 2024-12-13 01:06:07.014 [INFO][4844] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.194/26] IPv6=[] ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" HandleID="k8s-pod-network.6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.060137 containerd[1818]: 2024-12-13 01:06:07.016 [INFO][4827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0", GenerateName:"calico-kube-controllers-57f84b8987-", Namespace:"calico-system", SelfLink:"", UID:"650bc620-b65e-43c0-b8d1-820767c4d25d", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f84b8987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"calico-kube-controllers-57f84b8987-99kng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fa5ae36bcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.060137 containerd[1818]: 2024-12-13 01:06:07.016 [INFO][4827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.194/32] ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.060137 containerd[1818]: 2024-12-13 01:06:07.017 [INFO][4827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3fa5ae36bcc ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.060137 containerd[1818]: 2024-12-13 01:06:07.020 [INFO][4827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.060137 
containerd[1818]: 2024-12-13 01:06:07.020 [INFO][4827] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0", GenerateName:"calico-kube-controllers-57f84b8987-", Namespace:"calico-system", SelfLink:"", UID:"650bc620-b65e-43c0-b8d1-820767c4d25d", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f84b8987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124", Pod:"calico-kube-controllers-57f84b8987-99kng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fa5ae36bcc", MAC:"de:49:7f:b8:64:7d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.060137 containerd[1818]: 2024-12-13 01:06:07.051 [INFO][4827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124" Namespace="calico-system" Pod="calico-kube-controllers-57f84b8987-99kng" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:07.101431 containerd[1818]: time="2024-12-13T01:06:07.098736921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:07.101727 containerd[1818]: time="2024-12-13T01:06:07.101162272Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:07.101727 containerd[1818]: time="2024-12-13T01:06:07.101232174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:07.101727 containerd[1818]: time="2024-12-13T01:06:07.101360476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:07.135239 containerd[1818]: time="2024-12-13T01:06:07.134912189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:07.135239 containerd[1818]: time="2024-12-13T01:06:07.134996591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:07.135239 containerd[1818]: time="2024-12-13T01:06:07.135143894Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:07.135673 containerd[1818]: time="2024-12-13T01:06:07.135350398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:07.204871 containerd[1818]: time="2024-12-13T01:06:07.204813774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-j27h4,Uid:b105b0cb-657e-4fdc-9682-5e2264dac1c4,Namespace:kube-system,Attempt:1,} returns sandbox id \"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a\"" Dec 13 01:06:07.210897 containerd[1818]: time="2024-12-13T01:06:07.210748700Z" level=info msg="CreateContainer within sandbox \"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:06:07.236992 containerd[1818]: time="2024-12-13T01:06:07.236816653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-57f84b8987-99kng,Uid:650bc620-b65e-43c0-b8d1-820767c4d25d,Namespace:calico-system,Attempt:1,} returns sandbox id \"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124\"" Dec 13 01:06:07.241575 containerd[1818]: time="2024-12-13T01:06:07.240892440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Dec 13 01:06:07.270072 containerd[1818]: time="2024-12-13T01:06:07.270004158Z" level=info msg="CreateContainer within sandbox \"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7614c4437c9650f243d6633e01d2178cbe04a0335b3abd04170b3038a5f22095\"" Dec 13 01:06:07.271019 containerd[1818]: time="2024-12-13T01:06:07.270974479Z" level=info msg="StartContainer for \"7614c4437c9650f243d6633e01d2178cbe04a0335b3abd04170b3038a5f22095\"" Dec 13 01:06:07.334900 containerd[1818]: time="2024-12-13T01:06:07.334799635Z" level=info msg="StartContainer for \"7614c4437c9650f243d6633e01d2178cbe04a0335b3abd04170b3038a5f22095\" returns successfully" Dec 13 01:06:07.408551 systemd-networkd[1391]: vxlan.calico: Gained IPv6LL Dec 13 01:06:07.585125 containerd[1818]: time="2024-12-13T01:06:07.584465637Z" level=info msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.644 [INFO][5018] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.645 [INFO][5018] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" iface="eth0" netns="/var/run/netns/cni-76f347ee-82ad-3a6d-b290-14917e319f47" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.645 [INFO][5018] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" iface="eth0" netns="/var/run/netns/cni-76f347ee-82ad-3a6d-b290-14917e319f47" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.645 [INFO][5018] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" iface="eth0" netns="/var/run/netns/cni-76f347ee-82ad-3a6d-b290-14917e319f47" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.645 [INFO][5018] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.645 [INFO][5018] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.669 [INFO][5024] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.669 [INFO][5024] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.669 [INFO][5024] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.678 [WARNING][5024] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.678 [INFO][5024] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.680 [INFO][5024] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:07.682988 containerd[1818]: 2024-12-13 01:06:07.681 [INFO][5018] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:07.683889 containerd[1818]: time="2024-12-13T01:06:07.683168034Z" level=info msg="TearDown network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" successfully" Dec 13 01:06:07.683889 containerd[1818]: time="2024-12-13T01:06:07.683315737Z" level=info msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" returns successfully" Dec 13 01:06:07.684589 containerd[1818]: time="2024-12-13T01:06:07.684550763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-5ghj6,Uid:9c1afa73-315a-455e-b964-9dea2e760170,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:06:07.738182 systemd[1]: run-netns-cni\x2d76f347ee\x2d82ad\x2d3a6d\x2db290\x2d14917e319f47.mount: Deactivated successfully. 
Dec 13 01:06:07.868892 systemd-networkd[1391]: cali827c1b89b5f: Link UP Dec 13 01:06:07.872596 systemd-networkd[1391]: cali827c1b89b5f: Gained carrier Dec 13 01:06:07.888233 kubelet[3489]: I1213 01:06:07.884878 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-j27h4" podStartSLOduration=39.884398208 podStartE2EDuration="39.884398208s" podCreationTimestamp="2024-12-13 01:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:06:07.883527089 +0000 UTC m=+52.424511389" watchObservedRunningTime="2024-12-13 01:06:07.884398208 +0000 UTC m=+52.425382808" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.769 [INFO][5031] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0 calico-apiserver-85d899fc85- calico-apiserver 9c1afa73-315a-455e-b964-9dea2e760170 769 0 2024-12-13 01:05:38 +0000 UTC <nil> <nil> map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d899fc85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da calico-apiserver-85d899fc85-5ghj6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali827c1b89b5f [] []}} ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.769 [INFO][5031] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.802 [INFO][5041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" HandleID="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.811 [INFO][5041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" HandleID="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004d2b60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-672c6884da", "pod":"calico-apiserver-85d899fc85-5ghj6", "timestamp":"2024-12-13 01:06:07.802236163 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.812 [INFO][5041] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.812 [INFO][5041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.812 [INFO][5041] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.814 [INFO][5041] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.819 [INFO][5041] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.828 [INFO][5041] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.830 [INFO][5041] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.833 [INFO][5041] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.833 [INFO][5041] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.835 [INFO][5041] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.843 [INFO][5041] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.854 [INFO][5041] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.195/26] block=192.168.43.192/26 handle="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.855 [INFO][5041] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.195/26] handle="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.855 [INFO][5041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 01:06:07.915916 containerd[1818]: 2024-12-13 01:06:07.855 [INFO][5041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.195/26] IPv6=[] ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" HandleID="k8s-pod-network.aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.858 [INFO][5031] cni-plugin/k8s.go 386: Populated endpoint ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c1afa73-315a-455e-b964-9dea2e760170", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"calico-apiserver-85d899fc85-5ghj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali827c1b89b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.859 [INFO][5031] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.195/32] ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.859 [INFO][5031] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali827c1b89b5f ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.872 [INFO][5031] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.873 [INFO][5031] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c1afa73-315a-455e-b964-9dea2e760170", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd", Pod:"calico-apiserver-85d899fc85-5ghj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali827c1b89b5f", MAC:"e2:19:95:82:dc:73", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:07.919116 containerd[1818]: 2024-12-13 01:06:07.905 [INFO][5031] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-5ghj6" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:08.002034 containerd[1818]: time="2024-12-13T01:06:08.001901504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:08.002034 containerd[1818]: time="2024-12-13T01:06:08.001996006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:08.003140 containerd[1818]: time="2024-12-13T01:06:08.003036528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:08.003685 containerd[1818]: time="2024-12-13T01:06:08.003486937Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:08.075994 containerd[1818]: time="2024-12-13T01:06:08.075948876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-5ghj6,Uid:9c1afa73-315a-455e-b964-9dea2e760170,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd\"" Dec 13 01:06:08.586733 containerd[1818]: time="2024-12-13T01:06:08.586346214Z" level=info msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.658 [INFO][5119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.658 [INFO][5119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" iface="eth0" netns="/var/run/netns/cni-90f23c42-1191-a836-42c3-9ee50ea06b75" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.659 [INFO][5119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" iface="eth0" netns="/var/run/netns/cni-90f23c42-1191-a836-42c3-9ee50ea06b75" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.659 [INFO][5119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" iface="eth0" netns="/var/run/netns/cni-90f23c42-1191-a836-42c3-9ee50ea06b75" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.659 [INFO][5119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.659 [INFO][5119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.700 [INFO][5126] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.701 [INFO][5126] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.701 [INFO][5126] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.706 [WARNING][5126] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.706 [INFO][5126] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.708 [INFO][5126] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:08.711006 containerd[1818]: 2024-12-13 01:06:08.709 [INFO][5119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:08.711006 containerd[1818]: time="2024-12-13T01:06:08.710902604Z" level=info msg="TearDown network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" successfully" Dec 13 01:06:08.711006 containerd[1818]: time="2024-12-13T01:06:08.710955705Z" level=info msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" returns successfully" Dec 13 01:06:08.713459 containerd[1818]: time="2024-12-13T01:06:08.712795345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4p62t,Uid:67aac1e3-277c-4ca2-9d09-e223acfdf7de,Namespace:kube-system,Attempt:1,}" Dec 13 01:06:08.717016 systemd[1]: run-netns-cni\x2d90f23c42\x2d1191\x2da836\x2d42c3\x2d9ee50ea06b75.mount: Deactivated successfully. Dec 13 01:06:08.820451 systemd-networkd[1391]: cali3fa5ae36bcc: Gained IPv6LL Dec 13 01:06:08.944607 systemd-networkd[1391]: cali1c62241164c: Gained IPv6LL Dec 13 01:06:09.004505 systemd-networkd[1391]: cali9013669a477: Link UP Dec 13 01:06:09.004804 systemd-networkd[1391]: cali9013669a477: Gained carrier Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.819 [INFO][5133] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0 coredns-76f75df574- kube-system 67aac1e3-277c-4ca2-9d09-e223acfdf7de 786 0 2024-12-13 01:05:28 +0000 UTC <nil> <nil> map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da coredns-76f75df574-4p62t eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali9013669a477 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.819 [INFO][5133] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.897 [INFO][5144] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" HandleID="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.924 [INFO][5144] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" HandleID="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051750), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.1-a-672c6884da", "pod":"coredns-76f75df574-4p62t", "timestamp":"2024-12-13 01:06:08.897191428 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.924 [INFO][5144] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.924 [INFO][5144] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.924 [INFO][5144] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.928 [INFO][5144] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.934 [INFO][5144] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.943 [INFO][5144] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.949 [INFO][5144] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.952 [INFO][5144] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.952 [INFO][5144] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.954 [INFO][5144] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75 Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.965 [INFO][5144] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.985 [INFO][5144] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.196/26] block=192.168.43.192/26 handle="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" 
host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.986 [INFO][5144] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.196/26] handle="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.986 [INFO][5144] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:09.047988 containerd[1818]: 2024-12-13 01:06:08.986 [INFO][5144] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.196/26] IPv6=[] ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" HandleID="k8s-pod-network.b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:08.992 [INFO][5133] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"67aac1e3-277c-4ca2-9d09-e223acfdf7de", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"coredns-76f75df574-4p62t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9013669a477", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:08.992 [INFO][5133] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.196/32] ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:08.992 [INFO][5133] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9013669a477 ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" 
Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:09.003 [INFO][5133] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:09.008 [INFO][5133] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"67aac1e3-277c-4ca2-9d09-e223acfdf7de", ResourceVersion:"786", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75", Pod:"coredns-76f75df574-4p62t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9013669a477", MAC:"ea:46:ec:76:59:18", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:09.049157 containerd[1818]: 2024-12-13 01:06:09.033 [INFO][5133] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75" Namespace="kube-system" Pod="coredns-76f75df574-4p62t" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:09.073602 systemd-networkd[1391]: cali827c1b89b5f: Gained IPv6LL Dec 13 01:06:09.097933 containerd[1818]: time="2024-12-13T01:06:09.097799561Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:09.098143 containerd[1818]: time="2024-12-13T01:06:09.097914364Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:09.098143 containerd[1818]: time="2024-12-13T01:06:09.097946564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:09.098295 containerd[1818]: time="2024-12-13T01:06:09.098162569Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:09.133447 systemd[1]: run-containerd-runc-k8s.io-b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75-runc.B8BJhb.mount: Deactivated successfully. Dec 13 01:06:09.209768 containerd[1818]: time="2024-12-13T01:06:09.209601076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-4p62t,Uid:67aac1e3-277c-4ca2-9d09-e223acfdf7de,Namespace:kube-system,Attempt:1,} returns sandbox id \"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75\"" Dec 13 01:06:09.217361 containerd[1818]: time="2024-12-13T01:06:09.217143939Z" level=info msg="CreateContainer within sandbox \"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 01:06:09.382557 containerd[1818]: time="2024-12-13T01:06:09.382461110Z" level=info msg="CreateContainer within sandbox \"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"75e0bf6831630a98a80e4d8be9c7aa02503615bbf46add02e6e211d3ba9a304a\"" Dec 13 01:06:09.385373 containerd[1818]: time="2024-12-13T01:06:09.384472353Z" level=info msg="StartContainer for \"75e0bf6831630a98a80e4d8be9c7aa02503615bbf46add02e6e211d3ba9a304a\"" Dec 13 01:06:09.517716 containerd[1818]: time="2024-12-13T01:06:09.517653530Z" level=info msg="StartContainer for \"75e0bf6831630a98a80e4d8be9c7aa02503615bbf46add02e6e211d3ba9a304a\" returns successfully" Dec 13 01:06:09.900769 kubelet[3489]: I1213 01:06:09.900414 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-4p62t" podStartSLOduration=41.900295495 podStartE2EDuration="41.900295495s" podCreationTimestamp="2024-12-13 01:05:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 01:06:09.898393354 +0000 UTC m=+54.439377654" watchObservedRunningTime="2024-12-13 01:06:09.900295495 +0000 UTC m=+54.441279895" Dec 13 01:06:10.223923 containerd[1818]: time="2024-12-13T01:06:10.223360474Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:10.226195 containerd[1818]: time="2024-12-13T01:06:10.226010131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Dec 13 01:06:10.231193 containerd[1818]: time="2024-12-13T01:06:10.229950316Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:10.236064 containerd[1818]: time="2024-12-13T01:06:10.235982246Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:10.237153 containerd[1818]: 
time="2024-12-13T01:06:10.236648561Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.995699519s" Dec 13 01:06:10.237153 containerd[1818]: time="2024-12-13T01:06:10.236699162Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Dec 13 01:06:10.238034 containerd[1818]: time="2024-12-13T01:06:10.238005390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:06:10.259526 containerd[1818]: time="2024-12-13T01:06:10.259471554Z" level=info msg="CreateContainer within sandbox \"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Dec 13 01:06:10.314144 containerd[1818]: time="2024-12-13T01:06:10.314095034Z" level=info msg="CreateContainer within sandbox \"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3\"" Dec 13 01:06:10.316638 containerd[1818]: time="2024-12-13T01:06:10.316133078Z" level=info msg="StartContainer for \"f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3\"" Dec 13 01:06:10.426118 containerd[1818]: time="2024-12-13T01:06:10.426057552Z" level=info msg="StartContainer for \"f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3\" returns successfully" Dec 13 01:06:10.585045 containerd[1818]: time="2024-12-13T01:06:10.584161567Z" level=info msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" Dec 13 01:06:10.585045 containerd[1818]: time="2024-12-13T01:06:10.584757080Z" level=info msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.670 [INFO][5309] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.672 [INFO][5309] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" iface="eth0" netns="/var/run/netns/cni-e7d1819b-1787-ad1b-ade1-46b8320a3726" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.672 [INFO][5309] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" iface="eth0" netns="/var/run/netns/cni-e7d1819b-1787-ad1b-ade1-46b8320a3726" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.672 [INFO][5309] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" iface="eth0" netns="/var/run/netns/cni-e7d1819b-1787-ad1b-ade1-46b8320a3726" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.672 [INFO][5309] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.673 [INFO][5309] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.721 [INFO][5324] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.722 [INFO][5324] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.722 [INFO][5324] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.728 [WARNING][5324] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.728 [INFO][5324] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.730 [INFO][5324] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:10.734831 containerd[1818]: 2024-12-13 01:06:10.733 [INFO][5309] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:10.735569 containerd[1818]: time="2024-12-13T01:06:10.735363833Z" level=info msg="TearDown network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" successfully" Dec 13 01:06:10.735569 containerd[1818]: time="2024-12-13T01:06:10.735412134Z" level=info msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" returns successfully" Dec 13 01:06:10.736980 containerd[1818]: time="2024-12-13T01:06:10.736939667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6cd5,Uid:52cded57-51a5-4d1e-9829-02d4ac1d0d2d,Namespace:calico-system,Attempt:1,}" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.694 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.694 [INFO][5316] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" iface="eth0" netns="/var/run/netns/cni-ed733f0b-1ac2-3392-11bf-15ca80543289" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.694 [INFO][5316] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" iface="eth0" netns="/var/run/netns/cni-ed733f0b-1ac2-3392-11bf-15ca80543289" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.696 [INFO][5316] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" iface="eth0" netns="/var/run/netns/cni-ed733f0b-1ac2-3392-11bf-15ca80543289" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.697 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.697 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.742 [INFO][5329] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.742 [INFO][5329] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.742 [INFO][5329] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.750 [WARNING][5329] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.750 [INFO][5329] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.754 [INFO][5329] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:10.780241 containerd[1818]: 2024-12-13 01:06:10.764 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:10.781255 systemd[1]: run-netns-cni\x2de7d1819b\x2d1787\x2dad1b\x2dade1\x2d46b8320a3726.mount: Deactivated successfully. 
Dec 13 01:06:10.785103 containerd[1818]: time="2024-12-13T01:06:10.784847702Z" level=info msg="TearDown network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" successfully" Dec 13 01:06:10.785103 containerd[1818]: time="2024-12-13T01:06:10.784898203Z" level=info msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" returns successfully" Dec 13 01:06:10.794687 containerd[1818]: time="2024-12-13T01:06:10.794634313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-hbn4w,Uid:356134e0-757a-4c9e-82e3-a756fd989077,Namespace:calico-apiserver,Attempt:1,}" Dec 13 01:06:10.805564 systemd[1]: run-netns-cni\x2ded733f0b\x2d1ac2\x2d3392\x2d11bf\x2d15ca80543289.mount: Deactivated successfully. Dec 13 01:06:10.928468 systemd-networkd[1391]: cali9013669a477: Gained IPv6LL Dec 13 01:06:11.082704 kubelet[3489]: I1213 01:06:11.082201 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-57f84b8987-99kng" podStartSLOduration=30.085401282 podStartE2EDuration="33.082013621s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:06:07.240574333 +0000 UTC m=+51.781558733" lastFinishedPulling="2024-12-13 01:06:10.237186772 +0000 UTC m=+54.778171072" observedRunningTime="2024-12-13 01:06:10.939949952 +0000 UTC m=+55.480934352" watchObservedRunningTime="2024-12-13 01:06:11.082013621 +0000 UTC m=+55.622997921" Dec 13 01:06:11.234451 systemd-networkd[1391]: calic290ec6512d: Link UP Dec 13 01:06:11.239640 systemd-networkd[1391]: calic290ec6512d: Gained carrier Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.028 [INFO][5348] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0 calico-apiserver-85d899fc85- calico-apiserver 356134e0-757a-4c9e-82e3-a756fd989077 812 0 2024-12-13 01:05:38 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:85d899fc85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da calico-apiserver-85d899fc85-hbn4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic290ec6512d [] []}} ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.028 [INFO][5348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.145 [INFO][5375] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" HandleID="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 
01:06:11.167 [INFO][5375] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" HandleID="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edce0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.1-a-672c6884da", "pod":"calico-apiserver-85d899fc85-hbn4w", "timestamp":"2024-12-13 01:06:11.142646831 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.167 [INFO][5375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.167 [INFO][5375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.167 [INFO][5375] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.172 [INFO][5375] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.179 [INFO][5375] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.184 [INFO][5375] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.189 [INFO][5375] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.194 [INFO][5375] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.195 [INFO][5375] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.196 [INFO][5375] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209 Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.203 [INFO][5375] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.214 [INFO][5375] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.197/26] block=192.168.43.192/26 handle="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.215 [INFO][5375] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.197/26] handle="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" 
host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.215 [INFO][5375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:11.276689 containerd[1818]: 2024-12-13 01:06:11.215 [INFO][5375] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.197/26] IPv6=[] ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" HandleID="k8s-pod-network.f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.219 [INFO][5348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"356134e0-757a-4c9e-82e3-a756fd989077", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"calico-apiserver-85d899fc85-hbn4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic290ec6512d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.220 [INFO][5348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.197/32] ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.220 [INFO][5348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic290ec6512d ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.238 [INFO][5348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" 
WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.242 [INFO][5348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"356134e0-757a-4c9e-82e3-a756fd989077", ResourceVersion:"812", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209", Pod:"calico-apiserver-85d899fc85-hbn4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic290ec6512d", MAC:"6e:37:22:6b:23:7f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:11.283825 containerd[1818]: 2024-12-13 01:06:11.270 [INFO][5348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209" Namespace="calico-apiserver" Pod="calico-apiserver-85d899fc85-hbn4w" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:11.324023 systemd-networkd[1391]: cali1a36f49d41a: Link UP Dec 13 01:06:11.337558 systemd-networkd[1391]: cali1a36f49d41a: Gained carrier Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.026 [INFO][5336] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0 csi-node-driver- calico-system 52cded57-51a5-4d1e-9829-02d4ac1d0d2d 811 0 2024-12-13 01:05:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.1-a-672c6884da csi-node-driver-z6cd5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1a36f49d41a [] []}} ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" 
WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.027 [INFO][5336] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.157 [INFO][5382] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" HandleID="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.172 [INFO][5382] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" HandleID="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319030), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.1-a-672c6884da", "pod":"csi-node-driver-z6cd5", "timestamp":"2024-12-13 01:06:11.15741785 +0000 UTC"}, Hostname:"ci-4081.2.1-a-672c6884da", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.172 [INFO][5382] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.215 [INFO][5382] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.216 [INFO][5382] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.1-a-672c6884da' Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.224 [INFO][5382] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.243 [INFO][5382] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.257 [INFO][5382] ipam/ipam.go 489: Trying affinity for 192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.260 [INFO][5382] ipam/ipam.go 155: Attempting to load block cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.273 [INFO][5382] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.43.192/26 host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.275 [INFO][5382] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.43.192/26 handle="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.280 [INFO][5382] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9 Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.300 [INFO][5382] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.43.192/26 handle="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.317 [INFO][5382] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.43.198/26] block=192.168.43.192/26 handle="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.317 [INFO][5382] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.43.198/26] handle="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" host="ci-4081.2.1-a-672c6884da" Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.317 [INFO][5382] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
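
Both assignments above (192.168.43.196 for the coredns pod earlier, 192.168.43.198 for csi-node-driver-z6cd5 here) follow the same happy path under the host-wide lock: look up the host's block affinities, try and confirm the affinity for 192.168.43.192/26, load that block, assign one address from it, create a new "k8s-pod-network.<containerID>" handle, and write the block back to claim the IP. A hedged Go sketch of that loop over the /26 follows; the in-memory block is a simplified stand-in for Calico's datastore, which in reality does a compare-and-swap write of the block rather than a map insert.

    // Minimal sketch of the affinity-confirmed assignment path in the log:
    // load the host's affine block, claim the next free address under a new
    // handle, and "write" the block back.
    package main

    import (
        "fmt"
        "net"
        "sync"
    )

    type block struct {
        cidr      *net.IPNet
        allocated map[string]string // IP -> handle ID
    }

    var (
        hostLock sync.Mutex // "host-wide IPAM lock"
        affine   = mustBlock("192.168.43.192/26")
    )

    func mustBlock(cidr string) *block {
        _, ipnet, err := net.ParseCIDR(cidr)
        if err != nil {
            panic(err)
        }
        return &block{cidr: ipnet, allocated: map[string]string{}}
    }

    // autoAssign mirrors "Trying affinity ... Attempting to load block ...
    // Attempting to assign 1 addresses from block ... Writing block in order
    // to claim IPs".
    func autoAssign(host, handle string) (net.IP, error) {
        hostLock.Lock()
        defer hostLock.Unlock()

        fmt.Printf("Trying affinity for %s host=%q\n", affine.cidr, host)
        base := affine.cidr.IP // network address of the /26 (.192)
        for i := 1; i < 63; i++ { // skip network (.192) and broadcast (.255)
            cand := make(net.IP, len(base))
            copy(cand, base)
            cand[len(cand)-1] += byte(i)
            if _, taken := affine.allocated[cand.String()]; !taken {
                affine.allocated[cand.String()] = handle // claim under new handle
                fmt.Printf("Successfully claimed IPs: [%s/26]\n", cand)
                return cand, nil
            }
        }
        return nil, fmt.Errorf("block %s exhausted", affine.cidr)
    }

    func main() {
        // Hypothetical handles in the same shape as the log's
        // "k8s-pod-network.<containerID>" handle IDs.
        _, _ = autoAssign("ci-4081.2.1-a-672c6884da", "k8s-pod-network.example1")
        _, _ = autoAssign("ci-4081.2.1-a-672c6884da", "k8s-pod-network.example2")
    }

Successive pods on this node land on consecutive addresses of the same affine block, which is exactly the .193 through .198 progression visible across this log.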
Dec 13 01:06:11.371025 containerd[1818]: 2024-12-13 01:06:11.317 [INFO][5382] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.43.198/26] IPv6=[] ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" HandleID="k8s-pod-network.feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.320 [INFO][5336] cni-plugin/k8s.go 386: Populated endpoint ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52cded57-51a5-4d1e-9829-02d4ac1d0d2d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"", Pod:"csi-node-driver-z6cd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a36f49d41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.320 [INFO][5336] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.43.198/32] ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.320 [INFO][5336] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1a36f49d41a ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.326 [INFO][5336] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.327 [INFO][5336] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52cded57-51a5-4d1e-9829-02d4ac1d0d2d", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9", Pod:"csi-node-driver-z6cd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a36f49d41a", MAC:"f6:1e:3d:0b:0f:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:11.372015 containerd[1818]: 2024-12-13 01:06:11.355 [INFO][5336] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9" Namespace="calico-system" Pod="csi-node-driver-z6cd5" WorkloadEndpoint="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:11.394228 containerd[1818]: time="2024-12-13T01:06:11.393650452Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:11.394228 containerd[1818]: time="2024-12-13T01:06:11.393827456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:11.394228 containerd[1818]: time="2024-12-13T01:06:11.393873257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:11.394228 containerd[1818]: time="2024-12-13T01:06:11.394014360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:11.428118 containerd[1818]: time="2024-12-13T01:06:11.427999394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 01:06:11.428118 containerd[1818]: time="2024-12-13T01:06:11.428059496Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 01:06:11.428464 containerd[1818]: time="2024-12-13T01:06:11.428076796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:11.428918 containerd[1818]: time="2024-12-13T01:06:11.428201299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 01:06:11.493964 containerd[1818]: time="2024-12-13T01:06:11.493696713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-85d899fc85-hbn4w,Uid:356134e0-757a-4c9e-82e3-a756fd989077,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209\"" Dec 13 01:06:11.501491 containerd[1818]: time="2024-12-13T01:06:11.501080073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-z6cd5,Uid:52cded57-51a5-4d1e-9829-02d4ac1d0d2d,Namespace:calico-system,Attempt:1,} returns sandbox id \"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9\"" Dec 13 01:06:12.464566 systemd-networkd[1391]: calic290ec6512d: Gained IPv6LL Dec 13 01:06:12.656436 systemd-networkd[1391]: cali1a36f49d41a: Gained IPv6LL Dec 13 01:06:15.596576 containerd[1818]: time="2024-12-13T01:06:15.596511935Z" level=info msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.639 [WARNING][5524] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b105b0cb-657e-4fdc-9682-5e2264dac1c4", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a", Pod:"coredns-76f75df574-j27h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62241164c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.639 [INFO][5524] cni-plugin/k8s.go 608: Cleaning up netns 
ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.639 [INFO][5524] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" iface="eth0" netns="" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.639 [INFO][5524] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.640 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.664 [INFO][5530] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.665 [INFO][5530] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.665 [INFO][5530] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.673 [WARNING][5530] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.673 [INFO][5530] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.675 [INFO][5530] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:15.682107 containerd[1818]: 2024-12-13 01:06:15.679 [INFO][5524] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.682107 containerd[1818]: time="2024-12-13T01:06:15.681099763Z" level=info msg="TearDown network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" successfully" Dec 13 01:06:15.682107 containerd[1818]: time="2024-12-13T01:06:15.681162564Z" level=info msg="StopPodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" returns successfully" Dec 13 01:06:15.684186 containerd[1818]: time="2024-12-13T01:06:15.683680718Z" level=info msg="RemovePodSandbox for \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" Dec 13 01:06:15.684186 containerd[1818]: time="2024-12-13T01:06:15.683744320Z" level=info msg="Forcibly stopping sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\"" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.729 [WARNING][5548] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"b105b0cb-657e-4fdc-9682-5e2264dac1c4", ResourceVersion:"777", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"abf1ec10504a9cb3a03b3ac6f7d87a71749d226f61231f1cbea7f5c3b4cd183a", Pod:"coredns-76f75df574-j27h4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali1c62241164c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.729 [INFO][5548] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.729 [INFO][5548] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" iface="eth0" netns="" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.729 [INFO][5548] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.729 [INFO][5548] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.756 [INFO][5554] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.756 [INFO][5554] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.757 [INFO][5554] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.762 [WARNING][5554] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.762 [INFO][5554] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" HandleID="k8s-pod-network.7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--j27h4-eth0" Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.764 [INFO][5554] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:15.766567 containerd[1818]: 2024-12-13 01:06:15.765 [INFO][5548] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d" Dec 13 01:06:15.767417 containerd[1818]: time="2024-12-13T01:06:15.766625710Z" level=info msg="TearDown network for sandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" successfully" Dec 13 01:06:15.778451 containerd[1818]: time="2024-12-13T01:06:15.778138759Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:15.778451 containerd[1818]: time="2024-12-13T01:06:15.778276662Z" level=info msg="RemovePodSandbox \"7cb684b7d69aaf93c6e29ab208e324fb5f9ca5f316ffd90192eed40c25512a6d\" returns successfully" Dec 13 01:06:15.779137 containerd[1818]: time="2024-12-13T01:06:15.779098679Z" level=info msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.840 [WARNING][5573] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"67aac1e3-277c-4ca2-9d09-e223acfdf7de", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75", Pod:"coredns-76f75df574-4p62t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9013669a477", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.840 [INFO][5573] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.840 [INFO][5573] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" iface="eth0" netns="" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.840 [INFO][5573] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.840 [INFO][5573] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.871 [INFO][5579] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.872 [INFO][5579] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.872 [INFO][5579] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
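The ipam_plugin.go sequence in the 7cb684b7… teardown above is the fixed release protocol repeated for every sandbox in this log: release by handleID, take the host-wide IPAM lock, warn and ignore if the address is already gone, fall back to releasing by workloadID, then drop the lock. A toy in-memory model of that flow, assuming nothing about Calico's real data structures (all names here are invented for illustration):

package main

import (
	"fmt"
	"sync"
)

type ipamStore struct {
	mu         sync.Mutex          // stands in for the host-wide IPAM lock
	byHandle   map[string][]string // allocations keyed by handle ID
	byWorkload map[string][]string // older allocations keyed by workload ID
}

// release mirrors the logged order: handleID first, workloadID second,
// with a missing allocation treated as already-released, not an error.
func (s *ipamStore) release(handleID, workloadID string) []string {
	s.mu.Lock()         // "About to acquire host-wide IPAM lock." / "Acquired ..."
	defer s.mu.Unlock() // "Released host-wide IPAM lock."

	if ips, ok := s.byHandle[handleID]; ok {
		delete(s.byHandle, handleID)
		return ips
	}
	// "Asked to release address but it doesn't exist. Ignoring"
	if ips, ok := s.byWorkload[workloadID]; ok {
		delete(s.byWorkload, workloadID) // "Releasing address using workloadID"
		return ips
	}
	return nil
}

func main() {
	s := &ipamStore{
		byHandle:   map[string][]string{},
		byWorkload: map[string][]string{"coredns-76f75df574-j27h4": {"192.168.43.193/32"}},
	}
	fmt.Println(s.release("k8s-pod-network.7cb684b7...", "coredns-76f75df574-j27h4")) // [192.168.43.193/32]
	fmt.Println(s.release("k8s-pod-network.7cb684b7...", "coredns-76f75df574-j27h4")) // [] — a repeated DEL is a no-op
}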
Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.877 [WARNING][5579] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.877 [INFO][5579] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.879 [INFO][5579] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:15.881837 containerd[1818]: 2024-12-13 01:06:15.880 [INFO][5573] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:15.881837 containerd[1818]: time="2024-12-13T01:06:15.881792098Z" level=info msg="TearDown network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" successfully" Dec 13 01:06:15.881837 containerd[1818]: time="2024-12-13T01:06:15.881832198Z" level=info msg="StopPodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" returns successfully" Dec 13 01:06:15.883875 containerd[1818]: time="2024-12-13T01:06:15.883835542Z" level=info msg="RemovePodSandbox for \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" Dec 13 01:06:15.884003 containerd[1818]: time="2024-12-13T01:06:15.883888743Z" level=info msg="Forcibly stopping sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\"" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.929 [WARNING][5597] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"67aac1e3-277c-4ca2-9d09-e223acfdf7de", ResourceVersion:"797", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"b1a271122a84fc908a1e178a8081b3f20b4fdd03e9d3b2cf6b7ea3b67a017a75", Pod:"coredns-76f75df574-4p62t", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.43.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali9013669a477", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.929 [INFO][5597] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.929 [INFO][5597] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" iface="eth0" netns="" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.929 [INFO][5597] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.929 [INFO][5597] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.987 [INFO][5603] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.987 [INFO][5603] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.987 [INFO][5603] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
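Each "StopPodSandbox … returns successfully" / "Forcibly stopping sandbox" / "RemovePodSandbox … returns successfully" triple above is containerd's CRI service acting on kubelet requests; remove re-runs the stop path first, which is why every forced stop immediately precedes a successful remove. The same two calls can be issued directly against the CRI socket; a hedged sketch assuming the CRI v1 gRPC API (k8s.io/cri-api) and containerd's default socket path:

package main

import (
	"context"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd serves the CRI runtime service on its main socket.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	// Sandbox ID copied from the log above, purely as an example value.
	id := "d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d"

	// Stop tears down networking (the Calico CNI DEL sequence above)...
	if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
	// ...and Remove deletes the sandbox, stopping it again first if needed.
	if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
		log.Fatal(err)
	}
}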
Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.994 [WARNING][5603] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.994 [INFO][5603] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" HandleID="k8s-pod-network.d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Workload="ci--4081.2.1--a--672c6884da-k8s-coredns--76f75df574--4p62t-eth0" Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.997 [INFO][5603] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.002383 containerd[1818]: 2024-12-13 01:06:15.999 [INFO][5597] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d" Dec 13 01:06:16.003133 containerd[1818]: time="2024-12-13T01:06:16.002414403Z" level=info msg="TearDown network for sandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" successfully" Dec 13 01:06:16.010913 containerd[1818]: time="2024-12-13T01:06:16.010847785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:16.011184 containerd[1818]: time="2024-12-13T01:06:16.010936287Z" level=info msg="RemovePodSandbox \"d7956e5b4aa41252f2094ad774bb31851e7727a95c9a98e5e05b70bb0111c04d\" returns successfully" Dec 13 01:06:16.011919 containerd[1818]: time="2024-12-13T01:06:16.011542100Z" level=info msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.056 [WARNING][5622] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52cded57-51a5-4d1e-9829-02d4ac1d0d2d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9", Pod:"csi-node-driver-z6cd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a36f49d41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.057 [INFO][5622] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.057 [INFO][5622] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" iface="eth0" netns="" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.057 [INFO][5622] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.057 [INFO][5622] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.088 [INFO][5628] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.088 [INFO][5628] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.088 [INFO][5628] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.093 [WARNING][5628] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.094 [INFO][5628] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.095 [INFO][5628] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.097525 containerd[1818]: 2024-12-13 01:06:16.096 [INFO][5622] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.098423 containerd[1818]: time="2024-12-13T01:06:16.097584359Z" level=info msg="TearDown network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" successfully" Dec 13 01:06:16.098423 containerd[1818]: time="2024-12-13T01:06:16.097622260Z" level=info msg="StopPodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" returns successfully" Dec 13 01:06:16.098423 containerd[1818]: time="2024-12-13T01:06:16.098407877Z" level=info msg="RemovePodSandbox for \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" Dec 13 01:06:16.098589 containerd[1818]: time="2024-12-13T01:06:16.098447477Z" level=info msg="Forcibly stopping sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\"" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.148 [WARNING][5649] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"52cded57-51a5-4d1e-9829-02d4ac1d0d2d", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9", Pod:"csi-node-driver-z6cd5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.43.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1a36f49d41a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.149 [INFO][5649] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.149 [INFO][5649] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" iface="eth0" netns="" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.149 [INFO][5649] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.149 [INFO][5649] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.175 [INFO][5657] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.175 [INFO][5657] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.175 [INFO][5657] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.185 [WARNING][5657] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.185 [INFO][5657] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" HandleID="k8s-pod-network.6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Workload="ci--4081.2.1--a--672c6884da-k8s-csi--node--driver--z6cd5-eth0" Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.186 [INFO][5657] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.189317 containerd[1818]: 2024-12-13 01:06:16.187 [INFO][5649] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9" Dec 13 01:06:16.191245 containerd[1818]: time="2024-12-13T01:06:16.190388963Z" level=info msg="TearDown network for sandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" successfully" Dec 13 01:06:16.396487 containerd[1818]: time="2024-12-13T01:06:16.396415812Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:16.396739 containerd[1818]: time="2024-12-13T01:06:16.396529315Z" level=info msg="RemovePodSandbox \"6955ebddffc1753c61f9a34181736219e10dd4e248f0f1925e2613c6c75dd4e9\" returns successfully" Dec 13 01:06:16.399504 containerd[1818]: time="2024-12-13T01:06:16.399229573Z" level=info msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.492 [WARNING][5681] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c1afa73-315a-455e-b964-9dea2e760170", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd", Pod:"calico-apiserver-85d899fc85-5ghj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali827c1b89b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.493 [INFO][5681] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.493 [INFO][5681] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" iface="eth0" netns="" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.493 [INFO][5681] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.493 [INFO][5681] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.550 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.551 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.551 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.565 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.565 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.567 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.573291 containerd[1818]: 2024-12-13 01:06:16.569 [INFO][5681] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.575025 containerd[1818]: time="2024-12-13T01:06:16.573448833Z" level=info msg="TearDown network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" successfully" Dec 13 01:06:16.575025 containerd[1818]: time="2024-12-13T01:06:16.573492434Z" level=info msg="StopPodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" returns successfully" Dec 13 01:06:16.575025 containerd[1818]: time="2024-12-13T01:06:16.574514456Z" level=info msg="RemovePodSandbox for \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" Dec 13 01:06:16.575025 containerd[1818]: time="2024-12-13T01:06:16.574560757Z" level=info msg="Forcibly stopping sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\"" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.669 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"9c1afa73-315a-455e-b964-9dea2e760170", ResourceVersion:"776", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd", Pod:"calico-apiserver-85d899fc85-5ghj6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali827c1b89b5f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.669 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.669 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" iface="eth0" netns="" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.669 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.669 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.730 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.731 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.731 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.740 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.740 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" HandleID="k8s-pod-network.31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--5ghj6-eth0" Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.743 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.748487 containerd[1818]: 2024-12-13 01:06:16.745 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43" Dec 13 01:06:16.750771 containerd[1818]: time="2024-12-13T01:06:16.749617636Z" level=info msg="TearDown network for sandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" successfully" Dec 13 01:06:16.759075 containerd[1818]: time="2024-12-13T01:06:16.759019239Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:16.759238 containerd[1818]: time="2024-12-13T01:06:16.759108341Z" level=info msg="RemovePodSandbox \"31d2361f32a8a9c927b95f8ce472a355169fc740fe079834932ca1725eda6d43\" returns successfully" Dec 13 01:06:16.760176 containerd[1818]: time="2024-12-13T01:06:16.760139563Z" level=info msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.846 [WARNING][5731] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"356134e0-757a-4c9e-82e3-a756fd989077", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209", Pod:"calico-apiserver-85d899fc85-hbn4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic290ec6512d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.846 [INFO][5731] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.846 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" iface="eth0" netns="" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.846 [INFO][5731] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.846 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.895 [INFO][5737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.897 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.897 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.908 [WARNING][5737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.908 [INFO][5737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.910 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:16.918892 containerd[1818]: 2024-12-13 01:06:16.912 [INFO][5731] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:16.919877 containerd[1818]: time="2024-12-13T01:06:16.919316198Z" level=info msg="TearDown network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" successfully" Dec 13 01:06:16.919877 containerd[1818]: time="2024-12-13T01:06:16.919369099Z" level=info msg="StopPodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" returns successfully" Dec 13 01:06:16.921234 containerd[1818]: time="2024-12-13T01:06:16.921086737Z" level=info msg="RemovePodSandbox for \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" Dec 13 01:06:16.921234 containerd[1818]: time="2024-12-13T01:06:16.921137438Z" level=info msg="Forcibly stopping sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\"" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:16.996 [WARNING][5755] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0", GenerateName:"calico-apiserver-85d899fc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"356134e0-757a-4c9e-82e3-a756fd989077", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"85d899fc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209", Pod:"calico-apiserver-85d899fc85-hbn4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.43.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic290ec6512d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:16.996 [INFO][5755] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:16.996 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" iface="eth0" netns="" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:16.996 [INFO][5755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:16.996 [INFO][5755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.041 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.042 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.042 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.057 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.058 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" HandleID="k8s-pod-network.5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--apiserver--85d899fc85--hbn4w-eth0" Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.061 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:17.064863 containerd[1818]: 2024-12-13 01:06:17.063 [INFO][5755] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561" Dec 13 01:06:17.065690 containerd[1818]: time="2024-12-13T01:06:17.064927041Z" level=info msg="TearDown network for sandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" successfully" Dec 13 01:06:17.075678 containerd[1818]: time="2024-12-13T01:06:17.075509169Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:17.075678 containerd[1818]: time="2024-12-13T01:06:17.075614972Z" level=info msg="RemovePodSandbox \"5a9e5bab6efb5e8ab1ac50dcd3c5df5c4957615057593d1c883efdaeb0bc2561\" returns successfully" Dec 13 01:06:17.078380 containerd[1818]: time="2024-12-13T01:06:17.078341331Z" level=info msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.153 [WARNING][5779] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0", GenerateName:"calico-kube-controllers-57f84b8987-", Namespace:"calico-system", SelfLink:"", UID:"650bc620-b65e-43c0-b8d1-820767c4d25d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f84b8987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124", Pod:"calico-kube-controllers-57f84b8987-99kng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fa5ae36bcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.153 [INFO][5779] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.153 [INFO][5779] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" iface="eth0" netns="" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.153 [INFO][5779] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.153 [INFO][5779] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.195 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.196 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.196 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.205 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.205 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.208 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:17.214418 containerd[1818]: 2024-12-13 01:06:17.210 [INFO][5779] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.214418 containerd[1818]: time="2024-12-13T01:06:17.213453147Z" level=info msg="TearDown network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" successfully" Dec 13 01:06:17.214418 containerd[1818]: time="2024-12-13T01:06:17.213491648Z" level=info msg="StopPodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" returns successfully" Dec 13 01:06:17.215679 containerd[1818]: time="2024-12-13T01:06:17.214710174Z" level=info msg="RemovePodSandbox for \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" Dec 13 01:06:17.215679 containerd[1818]: time="2024-12-13T01:06:17.214750775Z" level=info msg="Forcibly stopping sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\"" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.282 [WARNING][5804] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0", GenerateName:"calico-kube-controllers-57f84b8987-", Namespace:"calico-system", SelfLink:"", UID:"650bc620-b65e-43c0-b8d1-820767c4d25d", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 1, 5, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"57f84b8987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.1-a-672c6884da", ContainerID:"6fe4eae0af38f246799edfead697807eaf795b860ec596e156a0be5f0be3d124", Pod:"calico-kube-controllers-57f84b8987-99kng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.43.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali3fa5ae36bcc", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.282 [INFO][5804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.282 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" iface="eth0" netns="" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.282 [INFO][5804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.282 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.338 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.338 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.339 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.349 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.349 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" HandleID="k8s-pod-network.dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Workload="ci--4081.2.1--a--672c6884da-k8s-calico--kube--controllers--57f84b8987--99kng-eth0" Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.351 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 01:06:17.355777 containerd[1818]: 2024-12-13 01:06:17.353 [INFO][5804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc" Dec 13 01:06:17.355777 containerd[1818]: time="2024-12-13T01:06:17.355586114Z" level=info msg="TearDown network for sandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" successfully" Dec 13 01:06:17.365172 containerd[1818]: time="2024-12-13T01:06:17.365118020Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Dec 13 01:06:17.365792 containerd[1818]: time="2024-12-13T01:06:17.365230823Z" level=info msg="RemovePodSandbox \"dfcbd3e9e88f140bff1ed1e55a8c87ad2d95beabf9ecaed3923fda7bc225d1dc\" returns successfully" Dec 13 01:06:17.696548 containerd[1818]: time="2024-12-13T01:06:17.696466072Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:17.702725 containerd[1818]: time="2024-12-13T01:06:17.702516202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Dec 13 01:06:17.706760 containerd[1818]: time="2024-12-13T01:06:17.706709293Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:17.712004 containerd[1818]: time="2024-12-13T01:06:17.711922105Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:17.713079 containerd[1818]: time="2024-12-13T01:06:17.713022129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 7.474890236s" Dec 13 01:06:17.713079 containerd[1818]: time="2024-12-13T01:06:17.713074130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:06:17.714399 containerd[1818]: time="2024-12-13T01:06:17.714235155Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Dec 13 01:06:17.716101 containerd[1818]: time="2024-12-13T01:06:17.716059895Z" level=info msg="CreateContainer within sandbox \"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:06:17.746725 containerd[1818]: time="2024-12-13T01:06:17.746670255Z" level=info msg="CreateContainer within sandbox \"aea3f2c4518e74474cf560db1cc199427f0a6fc8503bc486ae004b7585ee8bdd\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"dcfe9ab4ab8b2510d3a4373f5b17a813b490a848f9fb0ea6a27269ae97e5800a\"" Dec 13 01:06:17.747582 containerd[1818]: time="2024-12-13T01:06:17.747545274Z" level=info msg="StartContainer for \"dcfe9ab4ab8b2510d3a4373f5b17a813b490a848f9fb0ea6a27269ae97e5800a\"" Dec 13 01:06:17.841544 containerd[1818]: time="2024-12-13T01:06:17.841474001Z" level=info msg="StartContainer for \"dcfe9ab4ab8b2510d3a4373f5b17a813b490a848f9fb0ea6a27269ae97e5800a\" returns successfully" Dec 13 01:06:18.025772 containerd[1818]: time="2024-12-13T01:06:18.024771558Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 01:06:18.028245 containerd[1818]: time="2024-12-13T01:06:18.028167931Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Dec 13 01:06:18.031366 containerd[1818]: time="2024-12-13T01:06:18.031150695Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 316.876939ms" Dec 13 01:06:18.031366 containerd[1818]: time="2024-12-13T01:06:18.031222597Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Dec 13 01:06:18.033229 containerd[1818]: time="2024-12-13T01:06:18.033105037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 01:06:18.034855 containerd[1818]: time="2024-12-13T01:06:18.034823075Z" level=info msg="CreateContainer within sandbox \"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Dec 13 01:06:18.077220 containerd[1818]: time="2024-12-13T01:06:18.076656377Z" level=info msg="CreateContainer within sandbox \"f3e930b25a888358f1300164f3e865c448ff76dc1482b2b805cc4f52b74ae209\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"8464e5a461a0be4c4a66ff21dd9c839abe7e83c1ee13076df1966021fa0316ed\"" Dec 13 01:06:18.079581 containerd[1818]: time="2024-12-13T01:06:18.079383936Z" level=info msg="StartContainer for \"8464e5a461a0be4c4a66ff21dd9c839abe7e83c1ee13076df1966021fa0316ed\"" Dec 13 01:06:18.188924 containerd[1818]: time="2024-12-13T01:06:18.188862499Z" level=info msg="StartContainer for \"8464e5a461a0be4c4a66ff21dd9c839abe7e83c1ee13076df1966021fa0316ed\" returns successfully" Dec 13 01:06:18.742151 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1018032027.mount: Deactivated successfully. 
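Worth noting on the pulls above: the first pull of ghcr.io/flatcar/calico/apiserver:v3.29.1 took 7.474890236s with ~42 MB read, while the second PullImage of the same ref returned in 316.876939ms having read only 77 bytes, because every blob was already in the content store and only the manifest check went to the registry. Roughly the equivalent operation through the containerd Go client (a hedged sketch; the CRI plugin pulls through its own code path, but into the same "k8s.io" namespace):

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Images pulled by the CRI plugin live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	start := time.Now()
	img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/apiserver:v3.29.1", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	// A repeat pull of an already-present ref finishes in milliseconds,
	// matching the 7.47s-then-316ms pair in the log.
	fmt.Printf("pulled %s (%s) in %s\n", img.Name(), img.Target().Digest, time.Since(start))
}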
Dec 13 01:06:18.978099 kubelet[3489]: I1213 01:06:18.977604 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d899fc85-5ghj6" podStartSLOduration=31.34146659 podStartE2EDuration="40.977532021s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:06:08.077664313 +0000 UTC m=+52.618648613" lastFinishedPulling="2024-12-13 01:06:17.713729744 +0000 UTC m=+62.254714044" observedRunningTime="2024-12-13 01:06:17.964550158 +0000 UTC m=+62.505534558" watchObservedRunningTime="2024-12-13 01:06:18.977532021 +0000 UTC m=+63.518516421"
Dec 13 01:06:19.116606 kubelet[3489]: I1213 01:06:19.116552 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-85d899fc85-hbn4w" podStartSLOduration=34.585793484 podStartE2EDuration="41.11648472s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:06:11.500992171 +0000 UTC m=+56.041976471" lastFinishedPulling="2024-12-13 01:06:18.031683407 +0000 UTC m=+62.572667707" observedRunningTime="2024-12-13 01:06:18.979975174 +0000 UTC m=+63.520959474" watchObservedRunningTime="2024-12-13 01:06:19.11648472 +0000 UTC m=+63.657469020"
Dec 13 01:06:19.532096 containerd[1818]: time="2024-12-13T01:06:19.532030189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:19.534130 containerd[1818]: time="2024-12-13T01:06:19.534065933Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Dec 13 01:06:19.538676 containerd[1818]: time="2024-12-13T01:06:19.538579930Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:19.545738 containerd[1818]: time="2024-12-13T01:06:19.545653683Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:19.547097 containerd[1818]: time="2024-12-13T01:06:19.546617504Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.513417865s"
Dec 13 01:06:19.547097 containerd[1818]: time="2024-12-13T01:06:19.546670605Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\""
Dec 13 01:06:19.549202 containerd[1818]: time="2024-12-13T01:06:19.549163259Z" level=info msg="CreateContainer within sandbox \"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Dec 13 01:06:19.582176 containerd[1818]: time="2024-12-13T01:06:19.582113970Z" level=info msg="CreateContainer within sandbox \"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"adad3a06662e272af932b845a864e7c8781ed9f901253c3003ba3cd9a06e6c16\""
Dec 13 01:06:19.586949 containerd[1818]: time="2024-12-13T01:06:19.585820550Z" level=info msg="StartContainer for \"adad3a06662e272af932b845a864e7c8781ed9f901253c3003ba3cd9a06e6c16\""
Dec 13 01:06:19.687308 containerd[1818]: time="2024-12-13T01:06:19.687247239Z" level=info msg="StartContainer for \"adad3a06662e272af932b845a864e7c8781ed9f901253c3003ba3cd9a06e6c16\" returns successfully"
Dec 13 01:06:19.689807 containerd[1818]: time="2024-12-13T01:06:19.689756693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Dec 13 01:06:19.736465 systemd[1]: run-containerd-runc-k8s.io-adad3a06662e272af932b845a864e7c8781ed9f901253c3003ba3cd9a06e6c16-runc.vjI9ig.mount: Deactivated successfully.
Dec 13 01:06:21.288875 containerd[1818]: time="2024-12-13T01:06:21.288814906Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:21.292521 containerd[1818]: time="2024-12-13T01:06:21.292450985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081"
Dec 13 01:06:21.299353 containerd[1818]: time="2024-12-13T01:06:21.299276032Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:21.306516 containerd[1818]: time="2024-12-13T01:06:21.306442587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 01:06:21.307471 containerd[1818]: time="2024-12-13T01:06:21.307428108Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.617616614s"
Dec 13 01:06:21.307584 containerd[1818]: time="2024-12-13T01:06:21.307472509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\""
Dec 13 01:06:21.310485 containerd[1818]: time="2024-12-13T01:06:21.309940162Z" level=info msg="CreateContainer within sandbox \"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Dec 13 01:06:21.350627 containerd[1818]: time="2024-12-13T01:06:21.350560539Z" level=info msg="CreateContainer within sandbox \"feba9d455a03b3bcd97e201112f00923933f34926f03a5a645a4f291d2a455b9\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b54748f28425691377ab93b7c6fffff4d522e1a2d2a51f6f11248324b2bf7f0e\""
Dec 13 01:06:21.351914 containerd[1818]: time="2024-12-13T01:06:21.351763265Z" level=info msg="StartContainer for \"b54748f28425691377ab93b7c6fffff4d522e1a2d2a51f6f11248324b2bf7f0e\""
Dec 13 01:06:21.444146 containerd[1818]: time="2024-12-13T01:06:21.444015756Z" level=info msg="StartContainer for \"b54748f28425691377ab93b7c6fffff4d522e1a2d2a51f6f11248324b2bf7f0e\" returns successfully"
Dec 13 01:06:21.712349 kubelet[3489]: I1213 01:06:21.712129 3489 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Dec 13 01:06:21.712349 kubelet[3489]: I1213 01:06:21.712219 3489 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Dec 13 01:06:21.989280 kubelet[3489]: I1213 01:06:21.989063 3489 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-z6cd5" podStartSLOduration=34.187351285 podStartE2EDuration="43.988258702s" podCreationTimestamp="2024-12-13 01:05:38 +0000 UTC" firstStartedPulling="2024-12-13 01:06:11.507154204 +0000 UTC m=+56.048138504" lastFinishedPulling="2024-12-13 01:06:21.308061621 +0000 UTC m=+65.849045921" observedRunningTime="2024-12-13 01:06:21.986809971 +0000 UTC m=+66.527794371" watchObservedRunningTime="2024-12-13 01:06:21.988258702 +0000 UTC m=+66.529243102"
Dec 13 01:06:23.211296 systemd[1]: run-containerd-runc-k8s.io-f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3-runc.TJy8lc.mount: Deactivated successfully.
Dec 13 01:06:53.198995 systemd[1]: run-containerd-runc-k8s.io-f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3-runc.PqhXQH.mount: Deactivated successfully.
Dec 13 01:07:12.787042 systemd[1]: run-containerd-runc-k8s.io-f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3-runc.IdXosI.mount: Deactivated successfully.
Dec 13 01:07:15.916752 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.16.10:54566.service - OpenSSH per-connection server daemon (10.200.16.10:54566).
Dec 13 01:07:16.546515 sshd[6134]: Accepted publickey for core from 10.200.16.10 port 54566 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:16.548392 sshd[6134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:16.553862 systemd-logind[1788]: New session 10 of user core.
Dec 13 01:07:16.562585 systemd[1]: Started session-10.scope - Session 10 of User core.
Dec 13 01:07:17.132419 sshd[6134]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:17.138874 systemd[1]: sshd@7-10.200.8.40:22-10.200.16.10:54566.service: Deactivated successfully.
Dec 13 01:07:17.144051 systemd[1]: session-10.scope: Deactivated successfully.
Dec 13 01:07:17.145131 systemd-logind[1788]: Session 10 logged out. Waiting for processes to exit.
Dec 13 01:07:17.146912 systemd-logind[1788]: Removed session 10.
Dec 13 01:07:22.240084 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.16.10:40822.service - OpenSSH per-connection server daemon (10.200.16.10:40822).
Dec 13 01:07:22.873392 sshd[6155]: Accepted publickey for core from 10.200.16.10 port 40822 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:22.875577 sshd[6155]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:22.881669 systemd-logind[1788]: New session 11 of user core.
Dec 13 01:07:22.887741 systemd[1]: Started session-11.scope - Session 11 of User core.
Dec 13 01:07:23.207347 systemd[1]: run-containerd-runc-k8s.io-f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3-runc.4gH3Be.mount: Deactivated successfully.
Dec 13 01:07:23.399358 sshd[6155]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:23.405944 systemd[1]: sshd@8-10.200.8.40:22-10.200.16.10:40822.service: Deactivated successfully.
Dec 13 01:07:23.410555 systemd[1]: session-11.scope: Deactivated successfully.
Dec 13 01:07:23.411555 systemd-logind[1788]: Session 11 logged out. Waiting for processes to exit.
Dec 13 01:07:23.412749 systemd-logind[1788]: Removed session 11.
Dec 13 01:07:23.899707 systemd[1]: run-containerd-runc-k8s.io-2026d73656ebab7ad3f0509a3bb114de11789b9105904b322e0cd2a8ddfcdfb2-runc.bBrBHb.mount: Deactivated successfully.
Dec 13 01:07:28.524749 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.16.10:40836.service - OpenSSH per-connection server daemon (10.200.16.10:40836).
Dec 13 01:07:29.158918 sshd[6217]: Accepted publickey for core from 10.200.16.10 port 40836 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:29.160935 sshd[6217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:29.166534 systemd-logind[1788]: New session 12 of user core.
Dec 13 01:07:29.172568 systemd[1]: Started session-12.scope - Session 12 of User core.
Dec 13 01:07:29.666034 sshd[6217]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:29.669930 systemd[1]: sshd@9-10.200.8.40:22-10.200.16.10:40836.service: Deactivated successfully.
Dec 13 01:07:29.676444 systemd[1]: session-12.scope: Deactivated successfully.
Dec 13 01:07:29.678416 systemd-logind[1788]: Session 12 logged out. Waiting for processes to exit.
Dec 13 01:07:29.679590 systemd-logind[1788]: Removed session 12.
Dec 13 01:07:29.775634 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.16.10:36856.service - OpenSSH per-connection server daemon (10.200.16.10:36856).
Dec 13 01:07:30.404799 sshd[6231]: Accepted publickey for core from 10.200.16.10 port 36856 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:30.406785 sshd[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:30.412456 systemd-logind[1788]: New session 13 of user core.
Dec 13 01:07:30.417701 systemd[1]: Started session-13.scope - Session 13 of User core.
Dec 13 01:07:30.965882 sshd[6231]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:30.970288 systemd[1]: sshd@10-10.200.8.40:22-10.200.16.10:36856.service: Deactivated successfully.
Dec 13 01:07:30.977819 systemd-logind[1788]: Session 13 logged out. Waiting for processes to exit.
Dec 13 01:07:30.978654 systemd[1]: session-13.scope: Deactivated successfully.
Dec 13 01:07:30.979842 systemd-logind[1788]: Removed session 13.
Dec 13 01:07:31.074681 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.16.10:36862.service - OpenSSH per-connection server daemon (10.200.16.10:36862).
Dec 13 01:07:31.702193 sshd[6245]: Accepted publickey for core from 10.200.16.10 port 36862 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:31.704109 sshd[6245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:31.709605 systemd-logind[1788]: New session 14 of user core.
Dec 13 01:07:31.718562 systemd[1]: Started session-14.scope - Session 14 of User core.
Dec 13 01:07:32.224280 sshd[6245]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:32.230034 systemd[1]: sshd@11-10.200.8.40:22-10.200.16.10:36862.service: Deactivated successfully.
Dec 13 01:07:32.234697 systemd[1]: session-14.scope: Deactivated successfully.
Dec 13 01:07:32.235629 systemd-logind[1788]: Session 14 logged out. Waiting for processes to exit.
Dec 13 01:07:32.236806 systemd-logind[1788]: Removed session 14.
Dec 13 01:07:37.332656 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.16.10:36872.service - OpenSSH per-connection server daemon (10.200.16.10:36872).
Dec 13 01:07:37.959827 sshd[6265]: Accepted publickey for core from 10.200.16.10 port 36872 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:37.961824 sshd[6265]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:37.966597 systemd-logind[1788]: New session 15 of user core.
Dec 13 01:07:37.970501 systemd[1]: Started session-15.scope - Session 15 of User core.
Dec 13 01:07:38.471495 sshd[6265]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:38.476686 systemd[1]: sshd@12-10.200.8.40:22-10.200.16.10:36872.service: Deactivated successfully.
Dec 13 01:07:38.482594 systemd[1]: session-15.scope: Deactivated successfully.
Dec 13 01:07:38.483735 systemd-logind[1788]: Session 15 logged out. Waiting for processes to exit.
Dec 13 01:07:38.484796 systemd-logind[1788]: Removed session 15.
Dec 13 01:07:43.579676 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.16.10:34004.service - OpenSSH per-connection server daemon (10.200.16.10:34004).
Dec 13 01:07:44.211525 sshd[6291]: Accepted publickey for core from 10.200.16.10 port 34004 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:44.213846 sshd[6291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:44.222436 systemd-logind[1788]: New session 16 of user core.
Dec 13 01:07:44.228562 systemd[1]: Started session-16.scope - Session 16 of User core.
Dec 13 01:07:44.726018 sshd[6291]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:44.731905 systemd[1]: sshd@13-10.200.8.40:22-10.200.16.10:34004.service: Deactivated successfully.
Dec 13 01:07:44.738100 systemd-logind[1788]: Session 16 logged out. Waiting for processes to exit.
Dec 13 01:07:44.739012 systemd[1]: session-16.scope: Deactivated successfully.
Dec 13 01:07:44.742469 systemd-logind[1788]: Removed session 16.
Dec 13 01:07:49.836592 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.16.10:50316.service - OpenSSH per-connection server daemon (10.200.16.10:50316).
Dec 13 01:07:50.457193 sshd[6309]: Accepted publickey for core from 10.200.16.10 port 50316 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:50.459041 sshd[6309]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:50.464707 systemd-logind[1788]: New session 17 of user core.
Dec 13 01:07:50.472627 systemd[1]: Started session-17.scope - Session 17 of User core.
Dec 13 01:07:50.966969 sshd[6309]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:50.975546 systemd[1]: sshd@14-10.200.8.40:22-10.200.16.10:50316.service: Deactivated successfully.
Dec 13 01:07:50.988725 systemd-logind[1788]: Session 17 logged out. Waiting for processes to exit.
Dec 13 01:07:50.990053 systemd[1]: session-17.scope: Deactivated successfully.
Dec 13 01:07:50.992556 systemd-logind[1788]: Removed session 17.
Dec 13 01:07:56.075014 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.16.10:50320.service - OpenSSH per-connection server daemon (10.200.16.10:50320).
Dec 13 01:07:56.703118 sshd[6365]: Accepted publickey for core from 10.200.16.10 port 50320 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:07:56.705156 sshd[6365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:07:56.710527 systemd-logind[1788]: New session 18 of user core.
Dec 13 01:07:56.715561 systemd[1]: Started session-18.scope - Session 18 of User core.
Dec 13 01:07:57.213947 sshd[6365]: pam_unix(sshd:session): session closed for user core
Dec 13 01:07:57.221142 systemd[1]: sshd@15-10.200.8.40:22-10.200.16.10:50320.service: Deactivated successfully.
Dec 13 01:07:57.226064 systemd[1]: session-18.scope: Deactivated successfully.
Dec 13 01:07:57.227359 systemd-logind[1788]: Session 18 logged out. Waiting for processes to exit.
Dec 13 01:07:57.228867 systemd-logind[1788]: Removed session 18.
Dec 13 01:08:02.322829 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.16.10:48742.service - OpenSSH per-connection server daemon (10.200.16.10:48742).
Dec 13 01:08:02.955124 sshd[6381]: Accepted publickey for core from 10.200.16.10 port 48742 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:02.957236 sshd[6381]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:02.963186 systemd-logind[1788]: New session 19 of user core.
Dec 13 01:08:02.967625 systemd[1]: Started session-19.scope - Session 19 of User core.
Dec 13 01:08:03.476647 sshd[6381]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:03.481435 systemd[1]: sshd@16-10.200.8.40:22-10.200.16.10:48742.service: Deactivated successfully.
Dec 13 01:08:03.487572 systemd-logind[1788]: Session 19 logged out. Waiting for processes to exit.
Dec 13 01:08:03.488513 systemd[1]: session-19.scope: Deactivated successfully.
Dec 13 01:08:03.490651 systemd-logind[1788]: Removed session 19.
Dec 13 01:08:03.593387 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.16.10:48748.service - OpenSSH per-connection server daemon (10.200.16.10:48748).
Dec 13 01:08:04.226570 sshd[6395]: Accepted publickey for core from 10.200.16.10 port 48748 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:04.229088 sshd[6395]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:04.234700 systemd-logind[1788]: New session 20 of user core.
Dec 13 01:08:04.240487 systemd[1]: Started session-20.scope - Session 20 of User core.
Dec 13 01:08:04.803851 sshd[6395]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:04.810989 systemd[1]: sshd@17-10.200.8.40:22-10.200.16.10:48748.service: Deactivated successfully.
Dec 13 01:08:04.820333 systemd[1]: session-20.scope: Deactivated successfully.
Dec 13 01:08:04.822843 systemd-logind[1788]: Session 20 logged out. Waiting for processes to exit.
Dec 13 01:08:04.824307 systemd-logind[1788]: Removed session 20.
Dec 13 01:08:04.910552 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.16.10:48750.service - OpenSSH per-connection server daemon (10.200.16.10:48750).
Dec 13 01:08:05.533652 sshd[6406]: Accepted publickey for core from 10.200.16.10 port 48750 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:05.535742 sshd[6406]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:05.541249 systemd-logind[1788]: New session 21 of user core.
Dec 13 01:08:05.548566 systemd[1]: Started session-21.scope - Session 21 of User core.
Dec 13 01:08:07.989185 sshd[6406]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:07.994027 systemd[1]: sshd@18-10.200.8.40:22-10.200.16.10:48750.service: Deactivated successfully.
Dec 13 01:08:08.001366 systemd[1]: session-21.scope: Deactivated successfully.
Dec 13 01:08:08.002539 systemd-logind[1788]: Session 21 logged out. Waiting for processes to exit.
Dec 13 01:08:08.004110 systemd-logind[1788]: Removed session 21.
Dec 13 01:08:08.096820 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.16.10:48762.service - OpenSSH per-connection server daemon (10.200.16.10:48762).
Dec 13 01:08:08.729634 sshd[6426]: Accepted publickey for core from 10.200.16.10 port 48762 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:08.731605 sshd[6426]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:08.737535 systemd-logind[1788]: New session 22 of user core.
Dec 13 01:08:08.739808 systemd[1]: Started session-22.scope - Session 22 of User core.
Dec 13 01:08:09.358263 sshd[6426]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:09.363606 systemd[1]: sshd@19-10.200.8.40:22-10.200.16.10:48762.service: Deactivated successfully.
Dec 13 01:08:09.365029 systemd-logind[1788]: Session 22 logged out. Waiting for processes to exit.
Dec 13 01:08:09.370634 systemd[1]: session-22.scope: Deactivated successfully.
Dec 13 01:08:09.373378 systemd-logind[1788]: Removed session 22.
Dec 13 01:08:09.465595 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.16.10:45652.service - OpenSSH per-connection server daemon (10.200.16.10:45652).
Dec 13 01:08:10.095741 sshd[6438]: Accepted publickey for core from 10.200.16.10 port 45652 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:10.097755 sshd[6438]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:10.104672 systemd-logind[1788]: New session 23 of user core.
Dec 13 01:08:10.108514 systemd[1]: Started session-23.scope - Session 23 of User core.
Dec 13 01:08:10.608924 sshd[6438]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:10.614536 systemd[1]: sshd@20-10.200.8.40:22-10.200.16.10:45652.service: Deactivated successfully.
Dec 13 01:08:10.620447 systemd[1]: session-23.scope: Deactivated successfully.
Dec 13 01:08:10.621481 systemd-logind[1788]: Session 23 logged out. Waiting for processes to exit.
Dec 13 01:08:10.622743 systemd-logind[1788]: Removed session 23.
Dec 13 01:08:12.814011 systemd[1]: run-containerd-runc-k8s.io-f4a3ed6dd9cfd295370b588801c627a4f07f571bb37699d2c97a27988d7730c3-runc.DRJUqy.mount: Deactivated successfully.
Dec 13 01:08:15.717925 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.16.10:45668.service - OpenSSH per-connection server daemon (10.200.16.10:45668).
Dec 13 01:08:16.341110 sshd[6475]: Accepted publickey for core from 10.200.16.10 port 45668 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:16.344117 sshd[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:16.349130 systemd-logind[1788]: New session 24 of user core.
Dec 13 01:08:16.355815 systemd[1]: Started session-24.scope - Session 24 of User core.
Dec 13 01:08:16.855666 sshd[6475]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:16.859362 systemd[1]: sshd@21-10.200.8.40:22-10.200.16.10:45668.service: Deactivated successfully.
Dec 13 01:08:16.866983 systemd[1]: session-24.scope: Deactivated successfully.
Dec 13 01:08:16.869191 systemd-logind[1788]: Session 24 logged out. Waiting for processes to exit.
Dec 13 01:08:16.870575 systemd-logind[1788]: Removed session 24.
Dec 13 01:08:21.967586 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.16.10:59652.service - OpenSSH per-connection server daemon (10.200.16.10:59652).
Dec 13 01:08:22.601940 sshd[6489]: Accepted publickey for core from 10.200.16.10 port 59652 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:22.603868 sshd[6489]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:22.609600 systemd-logind[1788]: New session 25 of user core.
Dec 13 01:08:22.614853 systemd[1]: Started session-25.scope - Session 25 of User core.
Dec 13 01:08:23.107991 sshd[6489]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:23.113707 systemd[1]: sshd@22-10.200.8.40:22-10.200.16.10:59652.service: Deactivated successfully.
Dec 13 01:08:23.119378 systemd-logind[1788]: Session 25 logged out. Waiting for processes to exit.
Dec 13 01:08:23.120222 systemd[1]: session-25.scope: Deactivated successfully.
Dec 13 01:08:23.121494 systemd-logind[1788]: Removed session 25.
Dec 13 01:08:28.214596 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.16.10:59668.service - OpenSSH per-connection server daemon (10.200.16.10:59668).
Dec 13 01:08:28.844104 sshd[6543]: Accepted publickey for core from 10.200.16.10 port 59668 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:28.845883 sshd[6543]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:28.853284 systemd-logind[1788]: New session 26 of user core.
Dec 13 01:08:28.857750 systemd[1]: Started session-26.scope - Session 26 of User core.
Dec 13 01:08:29.349594 sshd[6543]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:29.354633 systemd[1]: sshd@23-10.200.8.40:22-10.200.16.10:59668.service: Deactivated successfully.
Dec 13 01:08:29.360032 systemd[1]: session-26.scope: Deactivated successfully.
Dec 13 01:08:29.360390 systemd-logind[1788]: Session 26 logged out. Waiting for processes to exit.
Dec 13 01:08:29.362371 systemd-logind[1788]: Removed session 26.
Dec 13 01:08:34.457691 systemd[1]: Started sshd@24-10.200.8.40:22-10.200.16.10:52882.service - OpenSSH per-connection server daemon (10.200.16.10:52882).
Dec 13 01:08:35.089592 sshd[6560]: Accepted publickey for core from 10.200.16.10 port 52882 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:35.091702 sshd[6560]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:35.096697 systemd-logind[1788]: New session 27 of user core.
Dec 13 01:08:35.103485 systemd[1]: Started session-27.scope - Session 27 of User core.
Dec 13 01:08:35.601504 sshd[6560]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:35.606544 systemd[1]: sshd@24-10.200.8.40:22-10.200.16.10:52882.service: Deactivated successfully.
Dec 13 01:08:35.612028 systemd[1]: session-27.scope: Deactivated successfully.
Dec 13 01:08:35.612413 systemd-logind[1788]: Session 27 logged out. Waiting for processes to exit.
Dec 13 01:08:35.613957 systemd-logind[1788]: Removed session 27.
Dec 13 01:08:40.713821 systemd[1]: Started sshd@25-10.200.8.40:22-10.200.16.10:36560.service - OpenSSH per-connection server daemon (10.200.16.10:36560).
Dec 13 01:08:41.337271 sshd[6575]: Accepted publickey for core from 10.200.16.10 port 36560 ssh2: RSA SHA256:XU24JaPrxoJ28UtO/mU1KRbPH4i7hP4R09dYxwYsDp4
Dec 13 01:08:41.340005 sshd[6575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Dec 13 01:08:41.345725 systemd-logind[1788]: New session 28 of user core.
Dec 13 01:08:41.349171 systemd[1]: Started session-28.scope - Session 28 of User core.
Dec 13 01:08:41.860675 sshd[6575]: pam_unix(sshd:session): session closed for user core
Dec 13 01:08:41.866171 systemd[1]: sshd@25-10.200.8.40:22-10.200.16.10:36560.service: Deactivated successfully.
Dec 13 01:08:41.872079 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 01:08:41.873099 systemd-logind[1788]: Session 28 logged out. Waiting for processes to exit.
Dec 13 01:08:41.874196 systemd-logind[1788]: Removed session 28.