Nov 12 20:54:46.067030 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:54:46.067056 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:54:46.067066 kernel: BIOS-provided physical RAM map: Nov 12 20:54:46.067075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 20:54:46.067080 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 12 20:54:46.067086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 12 20:54:46.067097 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Nov 12 20:54:46.067105 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Nov 12 20:54:46.067113 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 12 20:54:46.067120 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 12 20:54:46.067126 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 12 20:54:46.067134 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 12 20:54:46.067142 kernel: printk: bootconsole [earlyser0] enabled Nov 12 20:54:46.067148 kernel: NX (Execute Disable) protection: active Nov 12 20:54:46.067161 kernel: APIC: Static calls initialized Nov 12 20:54:46.067168 kernel: efi: EFI v2.7 by Microsoft Nov 12 20:54:46.067176 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 Nov 
12 20:54:46.067185 kernel: SMBIOS 3.1.0 present. Nov 12 20:54:46.067193 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 12 20:54:46.067199 kernel: Hypervisor detected: Microsoft Hyper-V Nov 12 20:54:46.067206 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 12 20:54:46.067213 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Nov 12 20:54:46.067219 kernel: Hyper-V: Nested features: 0x1e0101 Nov 12 20:54:46.067229 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 12 20:54:46.067238 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 12 20:54:46.067245 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:54:46.067252 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:54:46.067263 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 12 20:54:46.067271 kernel: tsc: Detected 2593.904 MHz processor Nov 12 20:54:46.067278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:54:46.067285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:54:46.067295 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 12 20:54:46.067303 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 20:54:46.067312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:54:46.067329 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 12 20:54:46.067339 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 12 20:54:46.067346 kernel: Using GB pages for direct mapping Nov 12 20:54:46.067352 kernel: Secure boot disabled Nov 12 20:54:46.067362 kernel: ACPI: Early table checksum verification disabled Nov 12 20:54:46.067371 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 12 
20:54:46.067382 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067393 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067403 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 12 20:54:46.067410 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 12 20:54:46.067418 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067428 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067438 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067450 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067462 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067473 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067481 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:54:46.067488 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 12 20:54:46.067495 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 12 20:54:46.067503 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 12 20:54:46.067513 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 12 20:54:46.067529 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 12 20:54:46.067538 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 12 20:54:46.067546 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 12 20:54:46.067561 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Nov 12 20:54:46.067572 kernel: ACPI: 
Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 12 20:54:46.067580 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 12 20:54:46.067587 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:54:46.067602 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:54:46.067620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 12 20:54:46.067636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 12 20:54:46.067643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 12 20:54:46.067656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 12 20:54:46.067673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 12 20:54:46.067686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 12 20:54:46.067695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 12 20:54:46.067702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 12 20:54:46.067709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 12 20:54:46.067717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 12 20:54:46.067732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 12 20:54:46.067749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 12 20:54:46.067766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 12 20:54:46.067778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 12 20:54:46.067785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 12 20:54:46.067795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 12 20:54:46.067809 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] 
Nov 12 20:54:46.067823 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 12 20:54:46.067836 kernel: Zone ranges: Nov 12 20:54:46.067846 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:54:46.067856 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:54:46.067870 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:54:46.067885 kernel: Movable zone start for each node Nov 12 20:54:46.067892 kernel: Early memory node ranges Nov 12 20:54:46.067904 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 12 20:54:46.067920 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 12 20:54:46.067930 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 12 20:54:46.067938 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:54:46.067965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 12 20:54:46.067977 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:54:46.067985 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 20:54:46.067997 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Nov 12 20:54:46.068013 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 12 20:54:46.068031 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 12 20:54:46.068041 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:54:46.068049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:54:46.068062 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:54:46.068081 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 12 20:54:46.068094 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:54:46.068103 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 12 20:54:46.068113 kernel: Booting paravirtualized kernel on Hyper-V Nov 12 20:54:46.068128 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, 
max_idle_ns: 1910969940391419 ns Nov 12 20:54:46.068141 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:54:46.068148 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:54:46.068162 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:54:46.068176 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:54:46.068193 kernel: Hyper-V: PV spinlocks enabled Nov 12 20:54:46.068200 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:54:46.068212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:54:46.068227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:54:46.068240 kernel: random: crng init done Nov 12 20:54:46.068249 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:54:46.068256 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:54:46.068270 kernel: Fallback order for Node 0: 0 Nov 12 20:54:46.068287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 12 20:54:46.068305 kernel: Policy zone: Normal Nov 12 20:54:46.068330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:54:46.068338 kernel: software IO TLB: area num 2. 
Nov 12 20:54:46.068355 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved) Nov 12 20:54:46.068368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:54:46.068376 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:54:46.068391 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:54:46.068408 kernel: Dynamic Preempt: voluntary Nov 12 20:54:46.068420 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:54:46.068431 kernel: rcu: RCU event tracing is enabled. Nov 12 20:54:46.068449 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:54:46.068463 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:54:46.068472 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:54:46.068485 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:54:46.068502 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:54:46.068514 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:54:46.068524 kernel: Using NULL legacy PIC Nov 12 20:54:46.068539 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 12 20:54:46.068553 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:54:46.068561 kernel: Console: colour dummy device 80x25 Nov 12 20:54:46.068575 kernel: printk: console [tty1] enabled Nov 12 20:54:46.068590 kernel: printk: console [ttyS0] enabled Nov 12 20:54:46.068604 kernel: printk: bootconsole [earlyser0] disabled Nov 12 20:54:46.068612 kernel: ACPI: Core revision 20230628 Nov 12 20:54:46.068627 kernel: Failed to register legacy timer interrupt Nov 12 20:54:46.068650 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:54:46.068662 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 12 20:54:46.068670 kernel: Hyper-V: Using IPI hypercalls Nov 12 20:54:46.068686 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 12 20:54:46.068699 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 12 20:54:46.068707 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 12 20:54:46.068723 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 12 20:54:46.068737 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 12 20:54:46.068750 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 12 20:54:46.068771 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Nov 12 20:54:46.068792 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 20:54:46.068812 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 20:54:46.068830 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:54:46.068847 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:54:46.068862 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:54:46.068880 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:54:46.068896 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 12 20:54:46.068910 kernel: RETBleed: Vulnerable Nov 12 20:54:46.068928 kernel: Speculative Store Bypass: Vulnerable Nov 12 20:54:46.068944 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:54:46.068962 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:54:46.068980 kernel: GDS: Unknown: Dependent on hypervisor status Nov 12 20:54:46.068997 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:54:46.069014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:54:46.069033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:54:46.069051 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 12 20:54:46.069067 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 12 20:54:46.069083 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 12 20:54:46.069099 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:54:46.069123 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 12 20:54:46.069138 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 12 20:54:46.069153 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 12 20:54:46.069168 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 12 20:54:46.069183 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:54:46.069198 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:54:46.069218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:54:46.069234 kernel: landlock: Up and running. Nov 12 20:54:46.069251 kernel: SELinux: Initializing. 
Nov 12 20:54:46.069267 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.069286 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.069303 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 12 20:54:46.069336 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:54:46.069352 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:54:46.069368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:54:46.069389 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 12 20:54:46.069408 kernel: signal: max sigframe size: 3632 Nov 12 20:54:46.069423 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:54:46.069441 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:54:46.069456 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:54:46.069469 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:54:46.069486 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:54:46.069502 kernel: .... node #0, CPUs: #1 Nov 12 20:54:46.069517 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 12 20:54:46.069533 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 12 20:54:46.069549 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:54:46.069573 kernel: smpboot: Max logical packages: 1 Nov 12 20:54:46.069589 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Nov 12 20:54:46.069605 kernel: devtmpfs: initialized Nov 12 20:54:46.069623 kernel: x86/mm: Memory block size: 128MB Nov 12 20:54:46.069638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 12 20:54:46.069653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:54:46.069668 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:54:46.069684 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:54:46.069699 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:54:46.069714 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:54:46.069728 kernel: audit: type=2000 audit(1731444885.027:1): state=initialized audit_enabled=0 res=1 Nov 12 20:54:46.069744 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:54:46.069761 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:54:46.069775 kernel: cpuidle: using governor menu Nov 12 20:54:46.069788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:54:46.069802 kernel: dca service started, version 1.12.1 Nov 12 20:54:46.069815 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 12 20:54:46.069829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:54:46.069843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:54:46.069857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:54:46.069870 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:54:46.069887 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:54:46.069900 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:54:46.069914 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:54:46.069928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:54:46.069943 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:54:46.069956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:54:46.069970 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:54:46.069983 kernel: ACPI: Interpreter enabled Nov 12 20:54:46.069997 kernel: ACPI: PM: (supports S0 S5) Nov 12 20:54:46.070016 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:54:46.070031 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:54:46.070046 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 12 20:54:46.070060 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 12 20:54:46.070074 kernel: iommu: Default domain type: Translated Nov 12 20:54:46.070089 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:54:46.070103 kernel: efivars: Registered efivars operations Nov 12 20:54:46.070117 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:54:46.070131 kernel: PCI: System does not support PCI Nov 12 20:54:46.070148 kernel: vgaarb: loaded Nov 12 20:54:46.070162 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 12 20:54:46.070176 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:54:46.070191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:54:46.070205 kernel: pnp: PnP ACPI init Nov 12 20:54:46.070219 
kernel: pnp: PnP ACPI: found 3 devices Nov 12 20:54:46.070233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:54:46.070248 kernel: NET: Registered PF_INET protocol family Nov 12 20:54:46.070262 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 12 20:54:46.070279 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 12 20:54:46.070294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:54:46.070309 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:54:46.070348 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 12 20:54:46.070363 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 12 20:54:46.070377 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.070391 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.070406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:54:46.070421 kernel: NET: Registered PF_XDP protocol family Nov 12 20:54:46.070439 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:54:46.070453 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 12 20:54:46.070468 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Nov 12 20:54:46.070482 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:54:46.070497 kernel: Initialise system trusted keyrings Nov 12 20:54:46.070511 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 12 20:54:46.070526 kernel: Key type asymmetric registered Nov 12 20:54:46.070540 kernel: Asymmetric key parser 'x509' registered Nov 12 20:54:46.070555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:54:46.070572 kernel: io scheduler mq-deadline 
registered Nov 12 20:54:46.070587 kernel: io scheduler kyber registered Nov 12 20:54:46.070602 kernel: io scheduler bfq registered Nov 12 20:54:46.070616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:54:46.070631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:54:46.070646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:54:46.070661 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 12 20:54:46.070675 kernel: i8042: PNP: No PS/2 controller found. Nov 12 20:54:46.070850 kernel: rtc_cmos 00:02: registered as rtc0 Nov 12 20:54:46.070989 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:54:45 UTC (1731444885) Nov 12 20:54:46.071109 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 12 20:54:46.071129 kernel: intel_pstate: CPU model not supported Nov 12 20:54:46.071144 kernel: efifb: probing for efifb Nov 12 20:54:46.071159 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 12 20:54:46.071174 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 12 20:54:46.071188 kernel: efifb: scrolling: redraw Nov 12 20:54:46.071206 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 12 20:54:46.071221 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 20:54:46.071235 kernel: fb0: EFI VGA frame buffer device Nov 12 20:54:46.071250 kernel: pstore: Using crash dump compression: deflate Nov 12 20:54:46.071264 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 20:54:46.071278 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:54:46.071293 kernel: Segment Routing with IPv6 Nov 12 20:54:46.071307 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:54:46.071362 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:54:46.071377 kernel: Key type dns_resolver registered Nov 12 20:54:46.071396 kernel: IPI shorthand broadcast: enabled Nov 12 20:54:46.071412 kernel: 
sched_clock: Marking stable (881003200, 45688000)->(1138098500, -211407300) Nov 12 20:54:46.071425 kernel: registered taskstats version 1 Nov 12 20:54:46.071439 kernel: Loading compiled-in X.509 certificates Nov 12 20:54:46.071453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:54:46.071468 kernel: Key type .fscrypt registered Nov 12 20:54:46.071485 kernel: Key type fscrypt-provisioning registered Nov 12 20:54:46.071500 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 12 20:54:46.071520 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:54:46.071535 kernel: ima: No architecture policies found Nov 12 20:54:46.071550 kernel: clk: Disabling unused clocks Nov 12 20:54:46.071566 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:54:46.071583 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:54:46.071600 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:54:46.071614 kernel: Run /init as init process Nov 12 20:54:46.071629 kernel: with arguments: Nov 12 20:54:46.071644 kernel: /init Nov 12 20:54:46.071662 kernel: with environment: Nov 12 20:54:46.071678 kernel: HOME=/ Nov 12 20:54:46.071694 kernel: TERM=linux Nov 12 20:54:46.071711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:54:46.071730 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:54:46.071748 systemd[1]: Detected virtualization microsoft. Nov 12 20:54:46.071763 systemd[1]: Detected architecture x86-64. Nov 12 20:54:46.071777 systemd[1]: Running in initrd. Nov 12 20:54:46.071795 systemd[1]: No hostname configured, using default hostname. 
Nov 12 20:54:46.071809 systemd[1]: Hostname set to . Nov 12 20:54:46.071825 systemd[1]: Initializing machine ID from random generator. Nov 12 20:54:46.071838 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:54:46.071854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:54:46.071868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:54:46.071885 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:54:46.071901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:54:46.071920 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:54:46.071936 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:54:46.071954 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:54:46.071970 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:54:46.071986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:54:46.072002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:54:46.072018 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:54:46.072036 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:54:46.072052 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:54:46.072068 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:54:46.072084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:54:46.072100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 12 20:54:46.072117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:54:46.072132 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:54:46.072148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:54:46.072166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:54:46.072182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:54:46.072198 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:54:46.072214 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:54:46.072230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:54:46.072246 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:54:46.072262 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:54:46.072277 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:54:46.072293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:54:46.072355 systemd-journald[176]: Collecting audit messages is disabled. Nov 12 20:54:46.072390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:46.072406 systemd-journald[176]: Journal started Nov 12 20:54:46.072443 systemd-journald[176]: Runtime Journal (/run/log/journal/d8f39417754c48969195639bc3e97be9) is 8.0M, max 158.8M, 150.8M free. Nov 12 20:54:46.084646 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:54:46.090464 systemd-modules-load[177]: Inserted module 'overlay' Nov 12 20:54:46.093488 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:54:46.100495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:54:46.104010 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 12 20:54:46.111459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:46.128519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:46.135868 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:54:46.139338 kernel: Bridge firewalling registered Nov 12 20:54:46.140394 systemd-modules-load[177]: Inserted module 'br_netfilter' Nov 12 20:54:46.143490 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:54:46.156478 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:54:46.165528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:46.171314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:54:46.178137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:54:46.181503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:46.196461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:54:46.207472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:46.211957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:54:46.224005 dracut-cmdline[203]: dracut-dracut-053 Nov 12 20:54:46.228343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 20:54:46.234597 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:54:46.252234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:54:46.263542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:54:46.303828 systemd-resolved[255]: Positive Trust Anchors:
Nov 12 20:54:46.303842 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:54:46.303899 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:54:46.329375 systemd-resolved[255]: Defaulting to hostname 'linux'.
Nov 12 20:54:46.330650 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:54:46.335619 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:54:46.347334 kernel: SCSI subsystem initialized
Nov 12 20:54:46.357335 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:54:46.368338 kernel: iscsi: registered transport (tcp)
Nov 12 20:54:46.389234 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:54:46.389296 kernel: QLogic iSCSI HBA Driver
Nov 12 20:54:46.423956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:54:46.432524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:54:46.461794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:54:46.461875 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:54:46.465003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:54:46.505347 kernel: raid6: avx512x4 gen() 18253 MB/s
Nov 12 20:54:46.524335 kernel: raid6: avx512x2 gen() 18175 MB/s
Nov 12 20:54:46.543331 kernel: raid6: avx512x1 gen() 18166 MB/s
Nov 12 20:54:46.562329 kernel: raid6: avx2x4 gen() 18083 MB/s
Nov 12 20:54:46.581336 kernel: raid6: avx2x2 gen() 18098 MB/s
Nov 12 20:54:46.601160 kernel: raid6: avx2x1 gen() 13892 MB/s
Nov 12 20:54:46.601196 kernel: raid6: using algorithm avx512x4 gen() 18253 MB/s
Nov 12 20:54:46.621982 kernel: raid6: .... xor() 8271 MB/s, rmw enabled
Nov 12 20:54:46.622013 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:54:46.644343 kernel: xor: automatically using best checksumming function avx
Nov 12 20:54:46.791342 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:54:46.801165 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:54:46.811502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:54:46.823048 systemd-udevd[396]: Using default interface naming scheme 'v255'.
Nov 12 20:54:46.827438 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:54:46.837461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:54:46.854735 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 12 20:54:46.881481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:54:46.897479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:54:46.936939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:54:46.951486 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:54:46.981299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:54:46.988570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:54:46.995840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:54:47.002395 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:54:47.013520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:54:47.027339 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:54:47.045788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:54:47.056088 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:54:47.056144 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:54:47.056943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:54:47.059949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:54:47.067757 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:54:47.074006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:54:47.084512 kernel: hv_vmbus: Vmbus version:5.2
Nov 12 20:54:47.074283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:47.099314 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 12 20:54:47.099351 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 12 20:54:47.079460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:54:47.101699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:54:47.116398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:54:47.116531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:47.134208 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 12 20:54:47.134237 kernel: PTP clock support registered
Nov 12 20:54:47.138198 kernel: hv_utils: Registering HyperV Utility Driver
Nov 12 20:54:47.138249 kernel: hv_vmbus: registering driver hv_utils
Nov 12 20:54:47.145339 kernel: hv_utils: Heartbeat IC version 3.0
Nov 12 20:54:47.145379 kernel: hv_utils: Shutdown IC version 3.2
Nov 12 20:54:47.151335 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 12 20:54:47.156804 kernel: hv_utils: TimeSync IC version 4.0
Nov 12 20:54:47.153946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:54:48.146282 systemd-resolved[255]: Clock change detected. Flushing caches.
Nov 12 20:54:48.161281 kernel: hv_vmbus: registering driver hv_storvsc
Nov 12 20:54:48.164789 kernel: hv_vmbus: registering driver hv_netvsc
Nov 12 20:54:48.169098 kernel: scsi host1: storvsc_host_t
Nov 12 20:54:48.169166 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 20:54:48.177923 kernel: scsi host0: storvsc_host_t
Nov 12 20:54:48.178237 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 12 20:54:48.183247 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Nov 12 20:54:48.193330 kernel: hv_vmbus: registering driver hid_hyperv
Nov 12 20:54:48.194690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:48.204757 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 12 20:54:48.204795 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 12 20:54:48.214468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:54:48.226292 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 12 20:54:48.230063 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:54:48.230085 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 12 20:54:48.251327 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 12 20:54:48.278564 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 12 20:54:48.278742 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 12 20:54:48.278897 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 12 20:54:48.279054 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 12 20:54:48.280275 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: VF slot 1 added
Nov 12 20:54:48.280458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:54:48.280485 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 12 20:54:48.251292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:54:48.295562 kernel: hv_vmbus: registering driver hv_pci
Nov 12 20:54:48.295622 kernel: hv_pci 0bae10c5-e4d2-43d0-9694-dd32d28d965d: PCI VMBus probing: Using version 0x10004
Nov 12 20:54:48.340596 kernel: hv_pci 0bae10c5-e4d2-43d0-9694-dd32d28d965d: PCI host bridge to bus e4d2:00
Nov 12 20:54:48.341024 kernel: pci_bus e4d2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 12 20:54:48.341303 kernel: pci_bus e4d2:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 12 20:54:48.341536 kernel: pci e4d2:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 12 20:54:48.341726 kernel: pci e4d2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:54:48.341895 kernel: pci e4d2:00:02.0: enabling Extended Tags
Nov 12 20:54:48.342121 kernel: pci e4d2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e4d2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 12 20:54:48.342369 kernel: pci_bus e4d2:00: busn_res: [bus 00-ff] end is updated to 00
Nov 12 20:54:48.342546 kernel: pci e4d2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:54:48.430210 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448)
Nov 12 20:54:48.444207 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451)
Nov 12 20:54:48.480631 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 12 20:54:48.499440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:54:48.518254 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 12 20:54:48.525026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 12 20:54:48.547776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 12 20:54:48.585443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:54:48.598679 kernel: mlx5_core e4d2:00:02.0: enabling device (0000 -> 0002)
Nov 12 20:54:48.834358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:54:48.834386 kernel: mlx5_core e4d2:00:02.0: firmware version: 14.30.1284
Nov 12 20:54:48.834576 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:54:48.834595 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: VF registering: eth1
Nov 12 20:54:48.834741 kernel: mlx5_core e4d2:00:02.0 eth1: joined to eth0
Nov 12 20:54:48.834920 kernel: mlx5_core e4d2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 12 20:54:48.844210 kernel: mlx5_core e4d2:00:02.0 enP58578s1: renamed from eth1
Nov 12 20:54:49.619209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:54:49.619799 disk-uuid[597]: The operation has completed successfully.
Nov 12 20:54:49.713722 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:54:49.713834 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:54:49.735375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:54:49.741490 sh[716]: Success
Nov 12 20:54:49.758374 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:54:49.823883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:54:49.833301 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:54:49.838106 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:54:49.853210 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:54:49.853250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:54:49.858381 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:54:49.861103 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:54:49.863584 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:54:49.925270 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:54:49.926163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:54:49.937385 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:54:49.943353 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:54:49.960281 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:49.960348 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:54:49.963253 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:54:49.972251 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:54:49.983377 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:54:49.988238 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:49.993454 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:54:50.004360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:54:50.035075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:54:50.047356 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:54:50.073106 systemd-networkd[900]: lo: Link UP
Nov 12 20:54:50.073116 systemd-networkd[900]: lo: Gained carrier
Nov 12 20:54:50.078934 systemd-networkd[900]: Enumeration completed
Nov 12 20:54:50.079045 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:54:50.082162 systemd[1]: Reached target network.target - Network.
Nov 12 20:54:50.091465 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:54:50.091470 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:54:50.153300 kernel: mlx5_core e4d2:00:02.0 enP58578s1: Link up
Nov 12 20:54:50.187474 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: Data path switched to VF: enP58578s1
Nov 12 20:54:50.189570 systemd-networkd[900]: enP58578s1: Link UP
Nov 12 20:54:50.189709 systemd-networkd[900]: eth0: Link UP
Nov 12 20:54:50.189868 systemd-networkd[900]: eth0: Gained carrier
Nov 12 20:54:50.189881 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:54:50.195889 systemd-networkd[900]: enP58578s1: Gained carrier
Nov 12 20:54:50.251257 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:54:50.264535 ignition[851]: Ignition 2.19.0
Nov 12 20:54:50.264546 ignition[851]: Stage: fetch-offline
Nov 12 20:54:50.267965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:54:50.264585 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:50.264596 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:50.264717 ignition[851]: parsed url from cmdline: ""
Nov 12 20:54:50.264722 ignition[851]: no config URL provided
Nov 12 20:54:50.264730 ignition[851]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:54:50.282352 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:54:50.264741 ignition[851]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:54:50.264748 ignition[851]: failed to fetch config: resource requires networking
Nov 12 20:54:50.264959 ignition[851]: Ignition finished successfully
Nov 12 20:54:50.306429 ignition[909]: Ignition 2.19.0
Nov 12 20:54:50.306440 ignition[909]: Stage: fetch
Nov 12 20:54:50.306663 ignition[909]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:50.306674 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:50.306763 ignition[909]: parsed url from cmdline: ""
Nov 12 20:54:50.306767 ignition[909]: no config URL provided
Nov 12 20:54:50.306771 ignition[909]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:54:50.306780 ignition[909]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:54:50.306801 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 12 20:54:50.394849 ignition[909]: GET result: OK
Nov 12 20:54:50.394952 ignition[909]: config has been read from IMDS userdata
Nov 12 20:54:50.394985 ignition[909]: parsing config with SHA512: 5258f6c496d30311d6f2469f830c3ca62deed3bdc800d814e80f7c603dcc370a4d8c5bf2a6ecc34fe885677d1b1db100ae0e9f9142e9bb5a04eba876e267acfd
Nov 12 20:54:50.399263 unknown[909]: fetched base config from "system"
Nov 12 20:54:50.399272 unknown[909]: fetched base config from "system"
Nov 12 20:54:50.399700 ignition[909]: fetch: fetch complete
Nov 12 20:54:50.399279 unknown[909]: fetched user config from "azure"
Nov 12 20:54:50.399704 ignition[909]: fetch: fetch passed
Nov 12 20:54:50.401442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:54:50.399748 ignition[909]: Ignition finished successfully
Nov 12 20:54:50.415333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:54:50.435924 ignition[915]: Ignition 2.19.0
Nov 12 20:54:50.435934 ignition[915]: Stage: kargs
Nov 12 20:54:50.438993 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:54:50.436166 ignition[915]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:50.436181 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:50.437070 ignition[915]: kargs: kargs passed
Nov 12 20:54:50.437114 ignition[915]: Ignition finished successfully
Nov 12 20:54:50.454770 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:54:50.470489 ignition[921]: Ignition 2.19.0
Nov 12 20:54:50.470499 ignition[921]: Stage: disks
Nov 12 20:54:50.472508 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:54:50.470725 ignition[921]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:50.470738 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:50.471583 ignition[921]: disks: disks passed
Nov 12 20:54:50.471624 ignition[921]: Ignition finished successfully
Nov 12 20:54:50.487138 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:54:50.492732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:54:50.498564 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:54:50.501091 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:54:50.505943 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:54:50.518352 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:54:50.540943 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 12 20:54:50.546743 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:54:50.558303 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:54:50.645207 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:54:50.646160 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:54:50.650861 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:54:50.667285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:54:50.678693 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940)
Nov 12 20:54:50.672827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:54:50.683333 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:54:50.693257 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:50.693287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:54:50.693300 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:54:50.694888 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:54:50.695024 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:54:50.710217 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:54:50.713650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:54:50.718376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:54:50.728348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:54:50.897739 coreos-metadata[942]: Nov 12 20:54:50.897 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:54:50.902125 coreos-metadata[942]: Nov 12 20:54:50.899 INFO Fetch successful
Nov 12 20:54:50.902125 coreos-metadata[942]: Nov 12 20:54:50.899 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:54:50.909692 coreos-metadata[942]: Nov 12 20:54:50.909 INFO Fetch successful
Nov 12 20:54:50.913452 coreos-metadata[942]: Nov 12 20:54:50.913 INFO wrote hostname ci-4081.2.0-a-d8aa37ea01 to /sysroot/etc/hostname
Nov 12 20:54:50.919573 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:54:50.926477 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:54:50.937386 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:54:50.948084 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:54:50.953523 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:54:51.223809 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:54:51.234275 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:54:51.243334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:54:51.250217 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:51.251397 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:54:51.279236 ignition[1062]: INFO : Ignition 2.19.0
Nov 12 20:54:51.279236 ignition[1062]: INFO : Stage: mount
Nov 12 20:54:51.279236 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:51.279236 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:51.294412 ignition[1062]: INFO : mount: mount passed
Nov 12 20:54:51.294412 ignition[1062]: INFO : Ignition finished successfully
Nov 12 20:54:51.279703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:54:51.288987 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:54:51.305568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:54:51.314615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:54:51.330206 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074)
Nov 12 20:54:51.330244 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:51.334215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:54:51.338466 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:54:51.345201 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:54:51.347368 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:54:51.372500 ignition[1091]: INFO : Ignition 2.19.0
Nov 12 20:54:51.372500 ignition[1091]: INFO : Stage: files
Nov 12 20:54:51.376659 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:51.376659 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:51.376659 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:54:51.386357 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:54:51.386357 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:54:51.415985 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:54:51.420810 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:54:51.420810 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:54:51.416497 unknown[1091]: wrote ssh authorized keys file for user: core
Nov 12 20:54:51.430851 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:54:51.430851 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:54:51.465694 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:54:51.664367 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:54:51.664367 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:54:51.670483 systemd-networkd[900]: eth0: Gained IPv6LL
Nov 12 20:54:52.026756 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:54:52.179396 systemd-networkd[900]: enP58578s1: Gained IPv6LL
Nov 12 20:54:52.652257 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:52.652257 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:54:52.661716 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:54:52.680015 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:54:52.683981 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:54:52.688808 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:54:52.693309 ignition[1091]: INFO : files: files passed
Nov 12 20:54:52.695249 ignition[1091]: INFO : Ignition finished successfully
Nov 12 20:54:52.699201 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:54:52.710357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:54:52.716281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:54:52.727169 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:54:52.727531 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:54:52.742212 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.742212 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.750596 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.751552 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:54:52.762051 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:54:52.772394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:54:52.796401 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:54:52.796526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:54:52.802274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:54:52.808041 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:54:52.810735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:54:52.821340 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:54:52.835133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:54:52.845350 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:54:52.854734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:54:52.854987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:54:52.855886 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:54:52.856337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:54:52.856471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:54:52.857204 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:54:52.857666 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:54:52.858171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:54:52.858593 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:54:52.858991 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:54:52.859494 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:54:52.859913 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:54:52.860363 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:54:52.860770 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:54:52.861273 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:54:52.861661 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:54:52.861789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:54:52.862550 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:54:52.863004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:54:52.863389 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:54:52.900034 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:54:52.906040 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:54:52.911026 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:54:52.926958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:54:52.929990 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:54:52.936810 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:54:52.936920 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:54:52.948730 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:54:52.953018 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:54:52.995380 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:54:52.997911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:54:52.998074 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:54:53.015366 ignition[1144]: INFO : Ignition 2.19.0
Nov 12 20:54:53.015366 ignition[1144]: INFO : Stage: umount
Nov 12 20:54:53.032591 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:53.032591 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:53.032591 ignition[1144]: INFO : umount: umount passed
Nov 12 20:54:53.032591 ignition[1144]: INFO : Ignition finished successfully
Nov 12 20:54:53.016398 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:54:53.018601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:54:53.018763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:54:53.021916 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:54:53.022048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:54:53.029782 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:54:53.029881 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:54:53.033708 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:54:53.033796 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:54:53.040148 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:54:53.040559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:54:53.044163 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:54:53.044280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:54:53.048817 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:54:53.048865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:54:53.053502 systemd[1]: Stopped target network.target - Network.
Nov 12 20:54:53.058003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:54:53.058057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:54:53.058158 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:54:53.058984 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:54:53.065380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:54:53.116268 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:54:53.120589 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:54:53.127947 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:54:53.128011 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:54:53.132639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:54:53.132690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:54:53.137917 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:54:53.137981 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:54:53.142284 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:54:53.142340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:54:53.147238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:54:53.152539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:54:53.161237 systemd-networkd[900]: eth0: DHCPv6 lease lost
Nov 12 20:54:53.162572 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:54:53.164282 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:54:53.164385 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:54:53.170160 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:54:53.170252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:54:53.185136 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:54:53.191481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:54:53.191544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:54:53.205199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:54:53.208817 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:54:53.208936 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:54:53.214629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:54:53.214680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:54:53.227926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:54:53.227989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:54:53.238654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:54:53.238716 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:54:53.247958 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:54:53.248158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:54:53.261441 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:54:53.261541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:54:53.264520 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:54:53.264568 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:54:53.269525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:54:53.269580 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:54:53.278343 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:54:53.278395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:54:53.282434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:54:53.282484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:54:53.306415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:54:53.324523 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: Data path switched from VF: enP58578s1
Nov 12 20:54:53.314925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:54:53.314993 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:54:53.315679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:54:53.315717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:53.320432 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:54:53.320518 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:54:53.341262 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:54:53.341373 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:54:53.528125 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:54:53.528295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:54:53.533536 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:54:53.539068 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:54:53.539137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:54:53.555365 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:54:53.564406 systemd[1]: Switching root.
Nov 12 20:54:53.603046 systemd-journald[176]: Journal stopped
Nov 12 20:54:46.067030 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:54:46.067056 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:54:46.067066 kernel: BIOS-provided physical RAM map:
Nov 12 20:54:46.067075 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:54:46.067080 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 12 20:54:46.067086 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 12 20:54:46.067097 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 12 20:54:46.067105 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 12 20:54:46.067113 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 12 20:54:46.067120 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 12 20:54:46.067126 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 12 20:54:46.067134 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 12 20:54:46.067142 kernel: printk: bootconsole [earlyser0] enabled
Nov 12 20:54:46.067148 kernel: NX (Execute Disable) protection: active
Nov 12 20:54:46.067161 kernel: APIC: Static calls initialized
Nov 12 20:54:46.067168 kernel: efi: EFI v2.7 by Microsoft
Nov 12 20:54:46.067176 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98
Nov 12 20:54:46.067185 kernel: SMBIOS 3.1.0 present.
Nov 12 20:54:46.067193 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 12 20:54:46.067199 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 12 20:54:46.067206 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 12 20:54:46.067213 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Nov 12 20:54:46.067219 kernel: Hyper-V: Nested features: 0x1e0101
Nov 12 20:54:46.067229 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 12 20:54:46.067238 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 12 20:54:46.067245 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:54:46.067252 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:54:46.067263 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 12 20:54:46.067271 kernel: tsc: Detected 2593.904 MHz processor
Nov 12 20:54:46.067278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:54:46.067285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:54:46.067295 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 12 20:54:46.067303 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:54:46.067312 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:54:46.067329 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 12 20:54:46.067339 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 12 20:54:46.067346 kernel: Using GB pages for direct mapping
Nov 12 20:54:46.067352 kernel: Secure boot disabled
Nov 12 20:54:46.067362 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:54:46.067371 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 12 20:54:46.067382 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067393 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067403 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 12 20:54:46.067410 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 12 20:54:46.067418 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067428 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067438 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067450 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067462 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067473 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067481 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:54:46.067488 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 12 20:54:46.067495 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 12 20:54:46.067503 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 12 20:54:46.067513 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 12 20:54:46.067529 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 12 20:54:46.067538 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 12 20:54:46.067546 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 12 20:54:46.067561 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 12 20:54:46.067572 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 12 20:54:46.067580 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 12 20:54:46.067587 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:54:46.067602 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:54:46.067620 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 12 20:54:46.067636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 12 20:54:46.067643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 12 20:54:46.067656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 12 20:54:46.067673 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 12 20:54:46.067686 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 12 20:54:46.067695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 12 20:54:46.067702 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 12 20:54:46.067709 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 12 20:54:46.067717 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 12 20:54:46.067732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 12 20:54:46.067749 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 12 20:54:46.067766 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 12 20:54:46.067778 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 12 20:54:46.067785 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 12 20:54:46.067795 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 12 20:54:46.067809 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 12 20:54:46.067823 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 12 20:54:46.067836 kernel: Zone ranges:
Nov 12 20:54:46.067846 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:54:46.067856 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 12 20:54:46.067870 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:54:46.067885 kernel: Movable zone start for each node
Nov 12 20:54:46.067892 kernel: Early memory node ranges
Nov 12 20:54:46.067904 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:54:46.067920 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 12 20:54:46.067930 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 12 20:54:46.067938 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:54:46.067965 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 12 20:54:46.067977 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:54:46.067985 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:54:46.067997 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 12 20:54:46.068013 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 12 20:54:46.068031 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 12 20:54:46.068041 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:54:46.068049 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:54:46.068062 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:54:46.068081 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 12 20:54:46.068094 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:54:46.068103 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 12 20:54:46.068113 kernel: Booting paravirtualized kernel on Hyper-V
Nov 12 20:54:46.068128 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:54:46.068141 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:54:46.068148 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:54:46.068162 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:54:46.068176 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:54:46.068193 kernel: Hyper-V: PV spinlocks enabled
Nov 12 20:54:46.068200 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:54:46.068212 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:54:46.068227 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:54:46.068240 kernel: random: crng init done
Nov 12 20:54:46.068249 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 12 20:54:46.068256 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:54:46.068270 kernel: Fallback order for Node 0: 0
Nov 12 20:54:46.068287 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 12 20:54:46.068305 kernel: Policy zone: Normal
Nov 12 20:54:46.068330 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:54:46.068338 kernel: software IO TLB: area num 2.
Nov 12 20:54:46.068355 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved)
Nov 12 20:54:46.068368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:54:46.068376 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:54:46.068391 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:54:46.068408 kernel: Dynamic Preempt: voluntary
Nov 12 20:54:46.068420 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:54:46.068431 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:54:46.068449 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:54:46.068463 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:54:46.068472 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:54:46.068485 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:54:46.068502 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:54:46.068514 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:54:46.068524 kernel: Using NULL legacy PIC
Nov 12 20:54:46.068539 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 12 20:54:46.068553 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:54:46.068561 kernel: Console: colour dummy device 80x25
Nov 12 20:54:46.068575 kernel: printk: console [tty1] enabled
Nov 12 20:54:46.068590 kernel: printk: console [ttyS0] enabled
Nov 12 20:54:46.068604 kernel: printk: bootconsole [earlyser0] disabled
Nov 12 20:54:46.068612 kernel: ACPI: Core revision 20230628
Nov 12 20:54:46.068627 kernel: Failed to register legacy timer interrupt
Nov 12 20:54:46.068650 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:54:46.068662 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 12 20:54:46.068670 kernel: Hyper-V: Using IPI hypercalls
Nov 12 20:54:46.068686 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 12 20:54:46.068699 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 12 20:54:46.068707 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 12 20:54:46.068723 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 12 20:54:46.068737 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 12 20:54:46.068750 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 12 20:54:46.068771 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Nov 12 20:54:46.068792 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:54:46.068812 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:54:46.068830 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:54:46.068847 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:54:46.068862 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:54:46.068880 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:54:46.068896 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 12 20:54:46.068910 kernel: RETBleed: Vulnerable
Nov 12 20:54:46.068928 kernel: Speculative Store Bypass: Vulnerable
Nov 12 20:54:46.068944 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:54:46.068962 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:54:46.068980 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:54:46.068997 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:54:46.069014 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:54:46.069033 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:54:46.069051 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 12 20:54:46.069067 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 12 20:54:46.069083 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 12 20:54:46.069099 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:54:46.069123 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 12 20:54:46.069138 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 12 20:54:46.069153 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 12 20:54:46.069168 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 12 20:54:46.069183 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:54:46.069198 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:54:46.069218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:54:46.069234 kernel: landlock: Up and running.
Nov 12 20:54:46.069251 kernel: SELinux: Initializing.
Nov 12 20:54:46.069267 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:54:46.069286 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:54:46.069303 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 12 20:54:46.069336 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:54:46.069352 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:54:46.069368 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:54:46.069389 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 12 20:54:46.069408 kernel: signal: max sigframe size: 3632
Nov 12 20:54:46.069423 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:54:46.069441 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:54:46.069456 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:54:46.069469 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:54:46.069486 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:54:46.069502 kernel: .... node #0, CPUs: #1
Nov 12 20:54:46.069517 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 12 20:54:46.069533 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:54:46.069549 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:54:46.069573 kernel: smpboot: Max logical packages: 1
Nov 12 20:54:46.069589 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Nov 12 20:54:46.069605 kernel: devtmpfs: initialized
Nov 12 20:54:46.069623 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:54:46.069638 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 12 20:54:46.069653 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:54:46.069668 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:54:46.069684 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:54:46.069699 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:54:46.069714 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:54:46.069728 kernel: audit: type=2000 audit(1731444885.027:1): state=initialized audit_enabled=0 res=1
Nov 12 20:54:46.069744 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:54:46.069761 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:54:46.069775 kernel: cpuidle: using governor menu
Nov 12 20:54:46.069788 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:54:46.069802 kernel: dca service started, version 1.12.1
Nov 12 20:54:46.069815 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 12 20:54:46.069829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:54:46.069843 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Nov 12 20:54:46.069857 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Nov 12 20:54:46.069870 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Nov 12 20:54:46.069887 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Nov 12 20:54:46.069900 kernel: ACPI: Added _OSI(Module Device) Nov 12 20:54:46.069914 kernel: ACPI: Added _OSI(Processor Device) Nov 12 20:54:46.069928 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Nov 12 20:54:46.069943 kernel: ACPI: Added _OSI(Processor Aggregator Device) Nov 12 20:54:46.069956 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Nov 12 20:54:46.069970 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Nov 12 20:54:46.069983 kernel: ACPI: Interpreter enabled Nov 12 20:54:46.069997 kernel: ACPI: PM: (supports S0 S5) Nov 12 20:54:46.070016 kernel: ACPI: Using IOAPIC for interrupt routing Nov 12 20:54:46.070031 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Nov 12 20:54:46.070046 kernel: PCI: Ignoring E820 reservations for host bridge windows Nov 12 20:54:46.070060 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Nov 12 20:54:46.070074 kernel: iommu: Default domain type: Translated Nov 12 20:54:46.070089 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Nov 12 20:54:46.070103 kernel: efivars: Registered efivars operations Nov 12 20:54:46.070117 kernel: PCI: Using ACPI for IRQ routing Nov 12 20:54:46.070131 kernel: PCI: System does not support PCI Nov 12 20:54:46.070148 kernel: vgaarb: loaded Nov 12 20:54:46.070162 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Nov 12 20:54:46.070176 kernel: VFS: Disk quotas dquot_6.6.0 Nov 12 20:54:46.070191 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Nov 12 20:54:46.070205 kernel: pnp: PnP ACPI init
Nov 12 20:54:46.070219 kernel: pnp: PnP ACPI: found 3 devices Nov 12 20:54:46.070233 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Nov 12 20:54:46.070248 kernel: NET: Registered PF_INET protocol family Nov 12 20:54:46.070262 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Nov 12 20:54:46.070279 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Nov 12 20:54:46.070294 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Nov 12 20:54:46.070309 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Nov 12 20:54:46.070348 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Nov 12 20:54:46.070363 kernel: TCP: Hash tables configured (established 65536 bind 65536) Nov 12 20:54:46.070377 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.070391 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Nov 12 20:54:46.070406 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Nov 12 20:54:46.070421 kernel: NET: Registered PF_XDP protocol family Nov 12 20:54:46.070439 kernel: PCI: CLS 0 bytes, default 64 Nov 12 20:54:46.070453 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Nov 12 20:54:46.070468 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) Nov 12 20:54:46.070482 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Nov 12 20:54:46.070497 kernel: Initialise system trusted keyrings Nov 12 20:54:46.070511 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Nov 12 20:54:46.070526 kernel: Key type asymmetric registered Nov 12 20:54:46.070540 kernel: Asymmetric key parser 'x509' registered Nov 12 20:54:46.070555 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Nov 12 20:54:46.070572 kernel: io scheduler mq-deadline registered
Nov 12 20:54:46.070587 kernel: io scheduler kyber registered Nov 12 20:54:46.070602 kernel: io scheduler bfq registered Nov 12 20:54:46.070616 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Nov 12 20:54:46.070631 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Nov 12 20:54:46.070646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Nov 12 20:54:46.070661 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Nov 12 20:54:46.070675 kernel: i8042: PNP: No PS/2 controller found. Nov 12 20:54:46.070850 kernel: rtc_cmos 00:02: registered as rtc0 Nov 12 20:54:46.070989 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:54:45 UTC (1731444885) Nov 12 20:54:46.071109 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Nov 12 20:54:46.071129 kernel: intel_pstate: CPU model not supported Nov 12 20:54:46.071144 kernel: efifb: probing for efifb Nov 12 20:54:46.071159 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Nov 12 20:54:46.071174 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Nov 12 20:54:46.071188 kernel: efifb: scrolling: redraw Nov 12 20:54:46.071206 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Nov 12 20:54:46.071221 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 20:54:46.071235 kernel: fb0: EFI VGA frame buffer device Nov 12 20:54:46.071250 kernel: pstore: Using crash dump compression: deflate Nov 12 20:54:46.071264 kernel: pstore: Registered efi_pstore as persistent store backend Nov 12 20:54:46.071278 kernel: NET: Registered PF_INET6 protocol family Nov 12 20:54:46.071293 kernel: Segment Routing with IPv6 Nov 12 20:54:46.071307 kernel: In-situ OAM (IOAM) with IPv6 Nov 12 20:54:46.071362 kernel: NET: Registered PF_PACKET protocol family Nov 12 20:54:46.071377 kernel: Key type dns_resolver registered Nov 12 20:54:46.071396 kernel: IPI shorthand broadcast: enabled
Nov 12 20:54:46.071412 kernel: sched_clock: Marking stable (881003200, 45688000)->(1138098500, -211407300) Nov 12 20:54:46.071425 kernel: registered taskstats version 1 Nov 12 20:54:46.071439 kernel: Loading compiled-in X.509 certificates Nov 12 20:54:46.071453 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a' Nov 12 20:54:46.071468 kernel: Key type .fscrypt registered Nov 12 20:54:46.071485 kernel: Key type fscrypt-provisioning registered Nov 12 20:54:46.071500 kernel: ima: No TPM chip found, activating TPM-bypass! Nov 12 20:54:46.071520 kernel: ima: Allocated hash algorithm: sha1 Nov 12 20:54:46.071535 kernel: ima: No architecture policies found Nov 12 20:54:46.071550 kernel: clk: Disabling unused clocks Nov 12 20:54:46.071566 kernel: Freeing unused kernel image (initmem) memory: 42828K Nov 12 20:54:46.071583 kernel: Write protecting the kernel read-only data: 36864k Nov 12 20:54:46.071600 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K Nov 12 20:54:46.071614 kernel: Run /init as init process Nov 12 20:54:46.071629 kernel: with arguments: Nov 12 20:54:46.071644 kernel: /init Nov 12 20:54:46.071662 kernel: with environment: Nov 12 20:54:46.071678 kernel: HOME=/ Nov 12 20:54:46.071694 kernel: TERM=linux Nov 12 20:54:46.071711 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Nov 12 20:54:46.071730 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Nov 12 20:54:46.071748 systemd[1]: Detected virtualization microsoft. Nov 12 20:54:46.071763 systemd[1]: Detected architecture x86-64. Nov 12 20:54:46.071777 systemd[1]: Running in initrd. Nov 12 20:54:46.071795 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:54:46.071809 systemd[1]: Hostname set to . Nov 12 20:54:46.071825 systemd[1]: Initializing machine ID from random generator. Nov 12 20:54:46.071838 systemd[1]: Queued start job for default target initrd.target. Nov 12 20:54:46.071854 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:54:46.071868 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:54:46.071885 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Nov 12 20:54:46.071901 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Nov 12 20:54:46.071920 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Nov 12 20:54:46.071936 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Nov 12 20:54:46.071954 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Nov 12 20:54:46.071970 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Nov 12 20:54:46.071986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:54:46.072002 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:54:46.072018 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:54:46.072036 systemd[1]: Reached target slices.target - Slice Units. Nov 12 20:54:46.072052 systemd[1]: Reached target swap.target - Swaps. Nov 12 20:54:46.072068 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:54:46.072084 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:54:46.072100 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Nov 12 20:54:46.072117 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Nov 12 20:54:46.072132 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Nov 12 20:54:46.072148 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:54:46.072166 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Nov 12 20:54:46.072182 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:54:46.072198 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:54:46.072214 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Nov 12 20:54:46.072230 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Nov 12 20:54:46.072246 systemd[1]: Finished network-cleanup.service - Network Cleanup. Nov 12 20:54:46.072262 systemd[1]: Starting systemd-fsck-usr.service... Nov 12 20:54:46.072277 systemd[1]: Starting systemd-journald.service - Journal Service... Nov 12 20:54:46.072293 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Nov 12 20:54:46.072355 systemd-journald[176]: Collecting audit messages is disabled. Nov 12 20:54:46.072390 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:46.072406 systemd-journald[176]: Journal started Nov 12 20:54:46.072443 systemd-journald[176]: Runtime Journal (/run/log/journal/d8f39417754c48969195639bc3e97be9) is 8.0M, max 158.8M, 150.8M free. Nov 12 20:54:46.084646 systemd[1]: Started systemd-journald.service - Journal Service. Nov 12 20:54:46.090464 systemd-modules-load[177]: Inserted module 'overlay' Nov 12 20:54:46.093488 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Nov 12 20:54:46.100495 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:54:46.104010 systemd[1]: Finished systemd-fsck-usr.service. 
Nov 12 20:54:46.111459 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:46.128519 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:46.135868 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Nov 12 20:54:46.139338 kernel: Bridge firewalling registered Nov 12 20:54:46.140394 systemd-modules-load[177]: Inserted module 'br_netfilter' Nov 12 20:54:46.143490 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Nov 12 20:54:46.156478 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:54:46.165528 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:46.171314 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:54:46.178137 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:54:46.181503 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:46.196461 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Nov 12 20:54:46.207472 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:46.211957 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:54:46.224005 dracut-cmdline[203]: dracut-dracut-053 Nov 12 20:54:46.228343 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 20:54:46.234597 dracut-cmdline[203]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:54:46.252234 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:46.263542 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:54:46.303828 systemd-resolved[255]: Positive Trust Anchors: Nov 12 20:54:46.303842 systemd-resolved[255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:54:46.303899 systemd-resolved[255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:54:46.329375 systemd-resolved[255]: Defaulting to hostname 'linux'. Nov 12 20:54:46.330650 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Nov 12 20:54:46.335619 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:54:46.347334 kernel: SCSI subsystem initialized Nov 12 20:54:46.357335 kernel: Loading iSCSI transport class v2.0-870. 
Nov 12 20:54:46.368338 kernel: iscsi: registered transport (tcp) Nov 12 20:54:46.389234 kernel: iscsi: registered transport (qla4xxx) Nov 12 20:54:46.389296 kernel: QLogic iSCSI HBA Driver Nov 12 20:54:46.423956 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Nov 12 20:54:46.432524 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Nov 12 20:54:46.461794 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Nov 12 20:54:46.461875 kernel: device-mapper: uevent: version 1.0.3 Nov 12 20:54:46.465003 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Nov 12 20:54:46.505347 kernel: raid6: avx512x4 gen() 18253 MB/s Nov 12 20:54:46.524335 kernel: raid6: avx512x2 gen() 18175 MB/s Nov 12 20:54:46.543331 kernel: raid6: avx512x1 gen() 18166 MB/s Nov 12 20:54:46.562329 kernel: raid6: avx2x4 gen() 18083 MB/s Nov 12 20:54:46.581336 kernel: raid6: avx2x2 gen() 18098 MB/s Nov 12 20:54:46.601160 kernel: raid6: avx2x1 gen() 13892 MB/s Nov 12 20:54:46.601196 kernel: raid6: using algorithm avx512x4 gen() 18253 MB/s Nov 12 20:54:46.621982 kernel: raid6: .... xor() 8271 MB/s, rmw enabled Nov 12 20:54:46.622013 kernel: raid6: using avx512x2 recovery algorithm Nov 12 20:54:46.644343 kernel: xor: automatically using best checksumming function avx Nov 12 20:54:46.791342 kernel: Btrfs loaded, zoned=no, fsverity=no Nov 12 20:54:46.801165 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:54:46.811502 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:54:46.823048 systemd-udevd[396]: Using default interface naming scheme 'v255'. Nov 12 20:54:46.827438 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:54:46.837461 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... 
Nov 12 20:54:46.854735 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Nov 12 20:54:46.881481 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:54:46.897479 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Nov 12 20:54:46.936939 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:54:46.951486 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Nov 12 20:54:46.981299 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Nov 12 20:54:46.988570 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:54:46.995840 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Nov 12 20:54:47.002395 systemd[1]: Reached target remote-fs.target - Remote File Systems. Nov 12 20:54:47.013520 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Nov 12 20:54:47.027339 kernel: cryptd: max_cpu_qlen set to 1000 Nov 12 20:54:47.045788 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:54:47.056088 kernel: AVX2 version of gcm_enc/dec engaged. Nov 12 20:54:47.056144 kernel: AES CTR mode by8 optimization enabled Nov 12 20:54:47.056943 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:54:47.059949 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:54:47.067757 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:47.074006 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:47.084512 kernel: hv_vmbus: Vmbus version:5.2 Nov 12 20:54:47.074283 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:47.099314 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 12 20:54:47.099351 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Nov 12 20:54:47.079460 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:47.101699 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:47.116398 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:47.116531 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:47.134208 kernel: hv_vmbus: registering driver hyperv_keyboard Nov 12 20:54:47.134237 kernel: PTP clock support registered Nov 12 20:54:47.138198 kernel: hv_utils: Registering HyperV Utility Driver Nov 12 20:54:47.138249 kernel: hv_vmbus: registering driver hv_utils Nov 12 20:54:47.145339 kernel: hv_utils: Heartbeat IC version 3.0 Nov 12 20:54:47.145379 kernel: hv_utils: Shutdown IC version 3.2 Nov 12 20:54:47.151335 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Nov 12 20:54:47.156804 kernel: hv_utils: TimeSync IC version 4.0 Nov 12 20:54:47.153946 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:48.146282 systemd-resolved[255]: Clock change detected. Flushing caches. Nov 12 20:54:48.161281 kernel: hv_vmbus: registering driver hv_storvsc Nov 12 20:54:48.164789 kernel: hv_vmbus: registering driver hv_netvsc Nov 12 20:54:48.169098 kernel: scsi host1: storvsc_host_t Nov 12 20:54:48.169166 kernel: hid: raw HID events driver (C) Jiri Kosina Nov 12 20:54:48.177923 kernel: scsi host0: storvsc_host_t Nov 12 20:54:48.178237 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Nov 12 20:54:48.183247 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Nov 12 20:54:48.193330 kernel: hv_vmbus: registering driver hid_hyperv Nov 12 20:54:48.194690 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:48.204757 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Nov 12 20:54:48.204795 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Nov 12 20:54:48.214468 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Nov 12 20:54:48.226292 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Nov 12 20:54:48.230063 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Nov 12 20:54:48.230085 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Nov 12 20:54:48.251327 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Nov 12 20:54:48.278564 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Nov 12 20:54:48.278742 kernel: sd 0:0:0:0: [sda] Write Protect is off Nov 12 20:54:48.278897 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Nov 12 20:54:48.279054 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Nov 12 20:54:48.280275 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: VF slot 1 added Nov 12 20:54:48.280458 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:54:48.280485 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Nov 12 20:54:48.251292 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Nov 12 20:54:48.295562 kernel: hv_vmbus: registering driver hv_pci Nov 12 20:54:48.295622 kernel: hv_pci 0bae10c5-e4d2-43d0-9694-dd32d28d965d: PCI VMBus probing: Using version 0x10004 Nov 12 20:54:48.340596 kernel: hv_pci 0bae10c5-e4d2-43d0-9694-dd32d28d965d: PCI host bridge to bus e4d2:00 Nov 12 20:54:48.341024 kernel: pci_bus e4d2:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Nov 12 20:54:48.341303 kernel: pci_bus e4d2:00: No busn resource found for root bus, will use [bus 00-ff] Nov 12 20:54:48.341536 kernel: pci e4d2:00:02.0: [15b3:1016] type 00 class 0x020000 Nov 12 20:54:48.341726 kernel: pci e4d2:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 12 20:54:48.341895 kernel: pci e4d2:00:02.0: enabling Extended Tags Nov 12 20:54:48.342121 kernel: pci e4d2:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e4d2:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Nov 12 20:54:48.342369 kernel: pci_bus e4d2:00: busn_res: [bus 00-ff] end is updated to 00 Nov 12 20:54:48.342546 kernel: pci e4d2:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Nov 12 20:54:48.430210 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (448) Nov 12 20:54:48.444207 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (451) Nov 12 20:54:48.480631 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Nov 12 20:54:48.499440 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 12 20:54:48.518254 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Nov 12 20:54:48.525026 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Nov 12 20:54:48.547776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Nov 12 20:54:48.585443 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Nov 12 20:54:48.598679 kernel: mlx5_core e4d2:00:02.0: enabling device (0000 -> 0002) Nov 12 20:54:48.834358 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:54:48.834386 kernel: mlx5_core e4d2:00:02.0: firmware version: 14.30.1284 Nov 12 20:54:48.834576 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:54:48.834595 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: VF registering: eth1 Nov 12 20:54:48.834741 kernel: mlx5_core e4d2:00:02.0 eth1: joined to eth0 Nov 12 20:54:48.834920 kernel: mlx5_core e4d2:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Nov 12 20:54:48.844210 kernel: mlx5_core e4d2:00:02.0 enP58578s1: renamed from eth1 Nov 12 20:54:49.619209 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Nov 12 20:54:49.619799 disk-uuid[597]: The operation has completed successfully. Nov 12 20:54:49.713722 systemd[1]: disk-uuid.service: Deactivated successfully. Nov 12 20:54:49.713834 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Nov 12 20:54:49.735375 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Nov 12 20:54:49.741490 sh[716]: Success Nov 12 20:54:49.758374 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Nov 12 20:54:49.823883 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Nov 12 20:54:49.833301 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Nov 12 20:54:49.838106 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Nov 12 20:54:49.853210 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77 Nov 12 20:54:49.853250 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:49.858381 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Nov 12 20:54:49.861103 kernel: BTRFS info (device dm-0): disabling log replay at mount time Nov 12 20:54:49.863584 kernel: BTRFS info (device dm-0): using free space tree Nov 12 20:54:49.925270 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Nov 12 20:54:49.926163 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Nov 12 20:54:49.937385 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Nov 12 20:54:49.943353 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Nov 12 20:54:49.960281 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:49.960348 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:49.963253 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:54:49.972251 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:54:49.983377 systemd[1]: mnt-oem.mount: Deactivated successfully. Nov 12 20:54:49.988238 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:49.993454 systemd[1]: Finished ignition-setup.service - Ignition (setup). Nov 12 20:54:50.004360 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Nov 12 20:54:50.035075 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:54:50.047356 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Nov 12 20:54:50.073106 systemd-networkd[900]: lo: Link UP Nov 12 20:54:50.073116 systemd-networkd[900]: lo: Gained carrier Nov 12 20:54:50.078934 systemd-networkd[900]: Enumeration completed Nov 12 20:54:50.079045 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:54:50.082162 systemd[1]: Reached target network.target - Network. Nov 12 20:54:50.091465 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:50.091470 systemd-networkd[900]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:54:50.153300 kernel: mlx5_core e4d2:00:02.0 enP58578s1: Link up Nov 12 20:54:50.187474 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: Data path switched to VF: enP58578s1 Nov 12 20:54:50.189570 systemd-networkd[900]: enP58578s1: Link UP Nov 12 20:54:50.189709 systemd-networkd[900]: eth0: Link UP Nov 12 20:54:50.189868 systemd-networkd[900]: eth0: Gained carrier Nov 12 20:54:50.189881 systemd-networkd[900]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:50.195889 systemd-networkd[900]: enP58578s1: Gained carrier Nov 12 20:54:50.251257 systemd-networkd[900]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 12 20:54:50.264535 ignition[851]: Ignition 2.19.0 Nov 12 20:54:50.264546 ignition[851]: Stage: fetch-offline Nov 12 20:54:50.267965 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Nov 12 20:54:50.264585 ignition[851]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:50.264596 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:54:50.264717 ignition[851]: parsed url from cmdline: "" Nov 12 20:54:50.264722 ignition[851]: no config URL provided Nov 12 20:54:50.264730 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:54:50.282352 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Nov 12 20:54:50.264741 ignition[851]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:54:50.264748 ignition[851]: failed to fetch config: resource requires networking Nov 12 20:54:50.264959 ignition[851]: Ignition finished successfully Nov 12 20:54:50.306429 ignition[909]: Ignition 2.19.0 Nov 12 20:54:50.306440 ignition[909]: Stage: fetch Nov 12 20:54:50.306663 ignition[909]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:50.306674 ignition[909]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:54:50.306763 ignition[909]: parsed url from cmdline: "" Nov 12 20:54:50.306767 ignition[909]: no config URL provided Nov 12 20:54:50.306771 ignition[909]: reading system config file "/usr/lib/ignition/user.ign" Nov 12 20:54:50.306780 ignition[909]: no config at "/usr/lib/ignition/user.ign" Nov 12 20:54:50.306801 ignition[909]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Nov 12 20:54:50.394849 ignition[909]: GET result: OK Nov 12 20:54:50.394952 ignition[909]: config has been read from IMDS userdata Nov 12 20:54:50.394985 ignition[909]: parsing config with SHA512: 5258f6c496d30311d6f2469f830c3ca62deed3bdc800d814e80f7c603dcc370a4d8c5bf2a6ecc34fe885677d1b1db100ae0e9f9142e9bb5a04eba876e267acfd Nov 12 20:54:50.399263 unknown[909]: fetched base config from "system" Nov 12 20:54:50.399272 unknown[909]: fetched base config from "system" Nov 12 20:54:50.399700 ignition[909]: fetch: fetch complete
Nov 12 20:54:50.399279 unknown[909]: fetched user config from "azure" Nov 12 20:54:50.399704 ignition[909]: fetch: fetch passed Nov 12 20:54:50.401442 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Nov 12 20:54:50.399748 ignition[909]: Ignition finished successfully Nov 12 20:54:50.415333 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Nov 12 20:54:50.435924 ignition[915]: Ignition 2.19.0 Nov 12 20:54:50.435934 ignition[915]: Stage: kargs Nov 12 20:54:50.438993 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Nov 12 20:54:50.436166 ignition[915]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:50.436181 ignition[915]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:54:50.437070 ignition[915]: kargs: kargs passed Nov 12 20:54:50.437114 ignition[915]: Ignition finished successfully Nov 12 20:54:50.454770 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Nov 12 20:54:50.470489 ignition[921]: Ignition 2.19.0 Nov 12 20:54:50.470499 ignition[921]: Stage: disks Nov 12 20:54:50.472508 systemd[1]: Finished ignition-disks.service - Ignition (disks). Nov 12 20:54:50.470725 ignition[921]: no configs at "/usr/lib/ignition/base.d" Nov 12 20:54:50.470738 ignition[921]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:54:50.471583 ignition[921]: disks: disks passed Nov 12 20:54:50.471624 ignition[921]: Ignition finished successfully Nov 12 20:54:50.487138 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Nov 12 20:54:50.492732 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Nov 12 20:54:50.498564 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:54:50.501091 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:54:50.505943 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:54:50.518352 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Nov 12 20:54:50.540943 systemd-fsck[929]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Nov 12 20:54:50.546743 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Nov 12 20:54:50.558303 systemd[1]: Mounting sysroot.mount - /sysroot... Nov 12 20:54:50.645207 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none. Nov 12 20:54:50.646160 systemd[1]: Mounted sysroot.mount - /sysroot. Nov 12 20:54:50.650861 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Nov 12 20:54:50.667285 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Nov 12 20:54:50.678693 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (940) Nov 12 20:54:50.672827 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Nov 12 20:54:50.683333 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Nov 12 20:54:50.693257 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6 Nov 12 20:54:50.693287 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Nov 12 20:54:50.693300 kernel: BTRFS info (device sda6): using free space tree Nov 12 20:54:50.694888 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Nov 12 20:54:50.695024 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:54:50.710217 kernel: BTRFS info (device sda6): auto enabling async discard Nov 12 20:54:50.713650 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Nov 12 20:54:50.718376 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. 
Nov 12 20:54:50.728348 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:54:50.897739 coreos-metadata[942]: Nov 12 20:54:50.897 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:54:50.902125 coreos-metadata[942]: Nov 12 20:54:50.899 INFO Fetch successful
Nov 12 20:54:50.902125 coreos-metadata[942]: Nov 12 20:54:50.899 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:54:50.909692 coreos-metadata[942]: Nov 12 20:54:50.909 INFO Fetch successful
Nov 12 20:54:50.913452 coreos-metadata[942]: Nov 12 20:54:50.913 INFO wrote hostname ci-4081.2.0-a-d8aa37ea01 to /sysroot/etc/hostname
Nov 12 20:54:50.919573 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:54:50.926477 initrd-setup-root[970]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:54:50.937386 initrd-setup-root[977]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:54:50.948084 initrd-setup-root[984]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:54:50.953523 initrd-setup-root[991]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:54:51.223809 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:54:51.234275 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:54:51.243334 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:54:51.250217 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:51.251397 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:54:51.279236 ignition[1062]: INFO : Ignition 2.19.0
Nov 12 20:54:51.279236 ignition[1062]: INFO : Stage: mount
Nov 12 20:54:51.279236 ignition[1062]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:51.279236 ignition[1062]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:51.294412 ignition[1062]: INFO : mount: mount passed
Nov 12 20:54:51.294412 ignition[1062]: INFO : Ignition finished successfully
Nov 12 20:54:51.279703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:54:51.288987 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:54:51.305568 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:54:51.314615 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:54:51.330206 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1074)
Nov 12 20:54:51.330244 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:54:51.334215 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:54:51.338466 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:54:51.345201 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:54:51.347368 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:54:51.372500 ignition[1091]: INFO : Ignition 2.19.0
Nov 12 20:54:51.372500 ignition[1091]: INFO : Stage: files
Nov 12 20:54:51.376659 ignition[1091]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:51.376659 ignition[1091]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:51.376659 ignition[1091]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:54:51.386357 ignition[1091]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:54:51.386357 ignition[1091]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:54:51.415985 ignition[1091]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:54:51.420810 ignition[1091]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:54:51.420810 ignition[1091]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:54:51.416497 unknown[1091]: wrote ssh authorized keys file for user: core
Nov 12 20:54:51.430851 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:54:51.430851 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:54:51.465694 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:54:51.664367 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:54:51.664367 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:51.676024 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Nov 12 20:54:51.670483 systemd-networkd[900]: eth0: Gained IPv6LL
Nov 12 20:54:52.026756 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:54:52.179396 systemd-networkd[900]: enP58578s1: Gained IPv6LL
Nov 12 20:54:52.652257 ignition[1091]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Nov 12 20:54:52.652257 ignition[1091]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:54:52.661716 ignition[1091]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:54:52.667002 ignition[1091]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:54:52.680015 ignition[1091]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:54:52.683981 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:54:52.688808 ignition[1091]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:54:52.693309 ignition[1091]: INFO : files: files passed
Nov 12 20:54:52.695249 ignition[1091]: INFO : Ignition finished successfully
Nov 12 20:54:52.699201 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:54:52.710357 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:54:52.716281 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:54:52.727169 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:54:52.727531 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:54:52.742212 initrd-setup-root-after-ignition[1120]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.742212 initrd-setup-root-after-ignition[1120]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.750596 initrd-setup-root-after-ignition[1124]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:54:52.751552 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:54:52.762051 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:54:52.772394 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:54:52.796401 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:54:52.796526 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:54:52.802274 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:54:52.808041 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:54:52.810735 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:54:52.821340 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:54:52.835133 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:54:52.845350 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:54:52.854734 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:54:52.854987 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:54:52.855886 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:54:52.856337 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:54:52.856471 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:54:52.857204 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:54:52.857666 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:54:52.858171 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:54:52.858593 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:54:52.858991 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:54:52.859494 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:54:52.859913 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:54:52.860363 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:54:52.860770 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:54:52.861273 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:54:52.861661 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:54:52.861789 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:54:52.862550 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:54:52.863004 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:54:52.863389 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:54:52.900034 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:54:52.906040 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:54:52.911026 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:54:52.926958 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:54:52.929990 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:54:52.936810 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:54:52.936920 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:54:52.948730 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:54:52.953018 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:54:52.995380 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:54:52.997911 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:54:52.998074 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:54:53.015366 ignition[1144]: INFO : Ignition 2.19.0
Nov 12 20:54:53.015366 ignition[1144]: INFO : Stage: umount
Nov 12 20:54:53.032591 ignition[1144]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:54:53.032591 ignition[1144]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:54:53.032591 ignition[1144]: INFO : umount: umount passed
Nov 12 20:54:53.032591 ignition[1144]: INFO : Ignition finished successfully
Nov 12 20:54:53.016398 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:54:53.018601 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:54:53.018763 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:54:53.021916 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:54:53.022048 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:54:53.029782 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:54:53.029881 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:54:53.033708 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:54:53.033796 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:54:53.040148 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:54:53.040559 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:54:53.044163 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:54:53.044280 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:54:53.048817 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:54:53.048865 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:54:53.053502 systemd[1]: Stopped target network.target - Network.
Nov 12 20:54:53.058003 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:54:53.058057 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:54:53.058158 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:54:53.058984 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:54:53.065380 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:54:53.116268 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:54:53.120589 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:54:53.127947 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:54:53.128011 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:54:53.132639 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:54:53.132690 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:54:53.137917 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:54:53.137981 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:54:53.142284 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:54:53.142340 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:54:53.147238 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:54:53.152539 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:54:53.161237 systemd-networkd[900]: eth0: DHCPv6 lease lost
Nov 12 20:54:53.162572 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:54:53.164282 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:54:53.164385 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:54:53.170160 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:54:53.170252 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:54:53.185136 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:54:53.191481 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:54:53.191544 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:54:53.205199 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:54:53.208817 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:54:53.208936 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:54:53.214629 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:54:53.214680 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:54:53.227926 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:54:53.227989 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:54:53.238654 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:54:53.238716 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:54:53.247958 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:54:53.248158 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:54:53.261441 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:54:53.261541 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:54:53.264520 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:54:53.264568 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:54:53.269525 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:54:53.269580 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:54:53.278343 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:54:53.278395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:54:53.282434 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:54:53.282484 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:54:53.306415 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:54:53.324523 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: Data path switched from VF: enP58578s1
Nov 12 20:54:53.314925 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:54:53.314993 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:54:53.315679 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:54:53.315717 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:54:53.320432 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:54:53.320518 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:54:53.341262 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:54:53.341373 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:54:53.528125 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:54:53.528295 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:54:53.533536 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:54:53.539068 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:54:53.539137 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:54:53.555365 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:54:53.564406 systemd[1]: Switching root.
Nov 12 20:54:53.603046 systemd-journald[176]: Journal stopped
Nov 12 20:54:55.900980 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:54:55.901015 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:54:55.901029 kernel: SELinux: policy capability open_perms=1
Nov 12 20:54:55.901039 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:54:55.901049 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:54:55.901061 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:54:55.901074 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:54:55.901089 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:54:55.901101 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:54:55.901112 kernel: audit: type=1403 audit(1731444894.375:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:54:55.901127 systemd[1]: Successfully loaded SELinux policy in 78.023ms.
Nov 12 20:54:55.901139 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.518ms.
Nov 12 20:54:55.901154 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:54:55.901169 systemd[1]: Detected virtualization microsoft.
Nov 12 20:54:55.901195 systemd[1]: Detected architecture x86-64.
Nov 12 20:54:55.901211 systemd[1]: Detected first boot.
Nov 12 20:54:55.901223 systemd[1]: Hostname set to .
Nov 12 20:54:55.901239 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:54:55.901254 zram_generator::config[1186]: No configuration found.
Nov 12 20:54:55.901271 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:54:55.901282 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:54:55.901297 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:54:55.901309 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:54:55.901324 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:54:55.901339 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:54:55.901351 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:54:55.901365 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:54:55.901381 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:54:55.901395 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:54:55.901408 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:54:55.901423 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:54:55.901437 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:54:55.901450 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:54:55.901465 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:54:55.901482 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:54:55.901495 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:54:55.901510 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:54:55.901523 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:54:55.901538 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:54:55.901550 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:54:55.901569 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:54:55.901585 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:54:55.901600 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:54:55.901612 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:54:55.901627 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:54:55.901643 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:54:55.901659 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:54:55.901671 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:54:55.901683 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:54:55.901701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:54:55.901716 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:54:55.901730 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:54:55.901745 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:54:55.901757 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:54:55.901776 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:54:55.901788 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:54:55.901804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:54:55.901820 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:54:55.901833 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:54:55.901849 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:54:55.901864 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:54:55.901878 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:54:55.901897 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:54:55.901918 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:54:55.901931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:54:55.901947 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:54:55.901961 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:54:55.901975 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:54:55.901988 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:54:55.902003 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:54:55.902019 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:54:55.902035 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:54:55.902050 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:54:55.902066 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:54:55.902079 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:54:55.902095 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:54:55.902109 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:54:55.902125 kernel: loop: module loaded
Nov 12 20:54:55.902140 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:54:55.902158 kernel: fuse: init (API version 7.39)
Nov 12 20:54:55.902170 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:54:55.902192 kernel: ACPI: bus type drm_connector registered
Nov 12 20:54:55.902210 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:54:55.902222 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:54:55.902234 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:54:55.902250 systemd[1]: Stopped verity-setup.service.
Nov 12 20:54:55.902285 systemd-journald[1278]: Collecting audit messages is disabled.
Nov 12 20:54:55.902322 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:54:55.902340 systemd-journald[1278]: Journal started
Nov 12 20:54:55.902369 systemd-journald[1278]: Runtime Journal (/run/log/journal/b60e2c9668094a1592519f8175084d16) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:54:55.332524 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:54:55.361127 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 12 20:54:55.361500 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:54:55.908371 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:54:55.915229 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:54:55.918312 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:54:55.921131 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:54:55.924028 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:54:55.927051 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:54:55.930280 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:54:55.933323 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:54:55.936839 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:54:55.940757 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:54:55.941052 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:54:55.944600 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:54:55.944893 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:54:55.948427 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:54:55.948710 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:54:55.953433 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:55.953646 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:55.957182 systemd[1]: modprobe@fuse.service: Deactivated successfully. Nov 12 20:54:55.957567 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Nov 12 20:54:55.960966 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:55.961268 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:55.965098 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Nov 12 20:54:55.968609 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Nov 12 20:54:55.972171 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Nov 12 20:54:55.977919 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:54:55.989457 systemd[1]: Reached target network-pre.target - Preparation for Network. Nov 12 20:54:55.997303 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Nov 12 20:54:56.001377 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Nov 12 20:54:56.004411 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Nov 12 20:54:56.004544 systemd[1]: Reached target local-fs.target - Local File Systems. Nov 12 20:54:56.008454 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Nov 12 20:54:56.012733 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Nov 12 20:54:56.017773 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Nov 12 20:54:56.020435 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:56.032998 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Nov 12 20:54:56.041318 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Nov 12 20:54:56.044760 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:54:56.050593 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Nov 12 20:54:56.053351 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Nov 12 20:54:56.057400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Nov 12 20:54:56.062414 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Nov 12 20:54:56.076010 systemd[1]: Starting systemd-sysusers.service - Create System Users... Nov 12 20:54:56.087616 systemd-journald[1278]: Time spent on flushing to /var/log/journal/b60e2c9668094a1592519f8175084d16 is 42.248ms for 957 entries. Nov 12 20:54:56.087616 systemd-journald[1278]: System Journal (/var/log/journal/b60e2c9668094a1592519f8175084d16) is 8.0M, max 2.6G, 2.6G free. Nov 12 20:54:56.234711 systemd-journald[1278]: Received client request to flush runtime journal. Nov 12 20:54:56.234792 kernel: loop0: detected capacity change from 0 to 31056 Nov 12 20:54:56.087388 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Nov 12 20:54:56.104899 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Nov 12 20:54:56.111660 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Nov 12 20:54:56.115447 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Nov 12 20:54:56.120949 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Nov 12 20:54:56.141377 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Nov 12 20:54:56.149481 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Nov 12 20:54:56.155250 udevadm[1325]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Nov 12 20:54:56.244180 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Nov 12 20:54:56.248878 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:54:56.276236 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Nov 12 20:54:56.276866 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Nov 12 20:54:56.312363 systemd[1]: Finished systemd-sysusers.service - Create System Users. Nov 12 20:54:56.325989 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Nov 12 20:54:56.340247 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Nov 12 20:54:56.359679 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Nov 12 20:54:56.359702 systemd-tmpfiles[1341]: ACLs are not supported, ignoring. Nov 12 20:54:56.365947 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
Nov 12 20:54:56.380246 kernel: loop1: detected capacity change from 0 to 142488 Nov 12 20:54:56.548212 kernel: loop2: detected capacity change from 0 to 211296 Nov 12 20:54:56.585435 kernel: loop3: detected capacity change from 0 to 140768 Nov 12 20:54:56.703427 kernel: loop4: detected capacity change from 0 to 31056 Nov 12 20:54:56.711584 kernel: loop5: detected capacity change from 0 to 142488 Nov 12 20:54:56.788612 kernel: loop6: detected capacity change from 0 to 211296 Nov 12 20:54:56.797211 kernel: loop7: detected capacity change from 0 to 140768 Nov 12 20:54:56.812099 (sd-merge)[1347]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Nov 12 20:54:56.812669 (sd-merge)[1347]: Merged extensions into '/usr'. Nov 12 20:54:56.816693 systemd[1]: Reloading requested from client PID 1323 ('systemd-sysext') (unit systemd-sysext.service)... Nov 12 20:54:56.816711 systemd[1]: Reloading... Nov 12 20:54:56.917490 zram_generator::config[1375]: No configuration found. Nov 12 20:54:57.104352 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:57.195785 systemd[1]: Reloading finished in 378 ms. Nov 12 20:54:57.222296 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Nov 12 20:54:57.237326 systemd[1]: Starting ensure-sysext.service... Nov 12 20:54:57.240397 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Nov 12 20:54:57.271549 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Nov 12 20:54:57.272026 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
Nov 12 20:54:57.272883 systemd-tmpfiles[1432]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Nov 12 20:54:57.273113 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Nov 12 20:54:57.273161 systemd-tmpfiles[1432]: ACLs are not supported, ignoring. Nov 12 20:54:57.276313 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:54:57.276324 systemd-tmpfiles[1432]: Skipping /boot Nov 12 20:54:57.288072 systemd-tmpfiles[1432]: Detected autofs mount point /boot during canonicalization of boot. Nov 12 20:54:57.288085 systemd-tmpfiles[1432]: Skipping /boot Nov 12 20:54:57.298337 systemd[1]: Reloading requested from client PID 1431 ('systemctl') (unit ensure-sysext.service)... Nov 12 20:54:57.298357 systemd[1]: Reloading... Nov 12 20:54:57.387248 zram_generator::config[1458]: No configuration found. Nov 12 20:54:57.531825 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:57.600468 systemd[1]: Reloading finished in 301 ms. Nov 12 20:54:57.615906 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Nov 12 20:54:57.623876 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:54:57.639346 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:57.677506 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Nov 12 20:54:57.683203 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Nov 12 20:54:57.699352 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Nov 12 20:54:57.704380 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Nov 12 20:54:57.716392 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Nov 12 20:54:57.724814 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Nov 12 20:54:57.731615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.731878 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:57.738477 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Nov 12 20:54:57.746376 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Nov 12 20:54:57.754475 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Nov 12 20:54:57.761575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:57.761738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.766531 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.767142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:57.767601 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:57.767738 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.775748 systemd-udevd[1527]: Using default interface naming scheme 'v255'. Nov 12 20:54:57.777968 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... 
Nov 12 20:54:57.780804 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.781163 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Nov 12 20:54:57.788302 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Nov 12 20:54:57.796340 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Nov 12 20:54:57.798387 systemd[1]: Reached target time-set.target - System Time Set. Nov 12 20:54:57.807824 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Nov 12 20:54:57.809349 systemd[1]: modprobe@drm.service: Deactivated successfully. Nov 12 20:54:57.809546 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Nov 12 20:54:57.813959 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Nov 12 20:54:57.814422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Nov 12 20:54:57.819798 systemd[1]: modprobe@loop.service: Deactivated successfully. Nov 12 20:54:57.820274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Nov 12 20:54:57.824154 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Nov 12 20:54:57.824373 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Nov 12 20:54:57.831182 systemd[1]: Finished ensure-sysext.service. Nov 12 20:54:57.836764 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Nov 12 20:54:57.837026 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Nov 12 20:54:57.838265 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Nov 12 20:54:57.879597 systemd[1]: Started systemd-userdbd.service - User Database Manager. Nov 12 20:54:57.898215 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Nov 12 20:54:57.939923 augenrules[1559]: No rules Nov 12 20:54:57.940165 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:57.946324 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:54:57.967383 systemd[1]: Starting systemd-networkd.service - Network Configuration... Nov 12 20:54:58.011861 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Nov 12 20:54:58.018782 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Nov 12 20:54:58.027177 systemd-resolved[1526]: Positive Trust Anchors: Nov 12 20:54:58.027534 systemd-resolved[1526]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Nov 12 20:54:58.027596 systemd-resolved[1526]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Nov 12 20:54:58.057361 systemd-resolved[1526]: Using system hostname 'ci-4081.2.0-a-d8aa37ea01'. Nov 12 20:54:58.065367 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Nov 12 20:54:58.080209 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1572) Nov 12 20:54:58.080558 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Nov 12 20:54:58.086205 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1572) Nov 12 20:54:58.141332 systemd-networkd[1569]: lo: Link UP Nov 12 20:54:58.141348 systemd-networkd[1569]: lo: Gained carrier Nov 12 20:54:58.148953 systemd-networkd[1569]: Enumeration completed Nov 12 20:54:58.149087 systemd[1]: Started systemd-networkd.service - Network Configuration. Nov 12 20:54:58.154736 systemd[1]: Reached target network.target - Network. Nov 12 20:54:58.162350 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:58.162360 systemd-networkd[1569]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Nov 12 20:54:58.165370 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Nov 12 20:54:58.177908 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. 
Nov 12 20:54:58.229428 kernel: mlx5_core e4d2:00:02.0 enP58578s1: Link up Nov 12 20:54:58.248214 kernel: hv_vmbus: registering driver hv_balloon Nov 12 20:54:58.256313 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Nov 12 20:54:58.256397 kernel: hv_netvsc 000d3ab5-83cd-000d-3ab5-83cd000d3ab5 eth0: Data path switched to VF: enP58578s1 Nov 12 20:54:58.256602 kernel: hv_vmbus: registering driver hyperv_fb Nov 12 20:54:58.256624 kernel: mousedev: PS/2 mouse device common for all mice Nov 12 20:54:58.262363 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Nov 12 20:54:58.272208 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Nov 12 20:54:58.272280 kernel: Console: switching to colour dummy device 80x25 Nov 12 20:54:58.277289 kernel: Console: switching to colour frame buffer device 128x48 Nov 12 20:54:58.278818 systemd-networkd[1569]: enP58578s1: Link UP Nov 12 20:54:58.279407 systemd-networkd[1569]: eth0: Link UP Nov 12 20:54:58.279413 systemd-networkd[1569]: eth0: Gained carrier Nov 12 20:54:58.279462 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:58.283675 systemd-networkd[1569]: enP58578s1: Gained carrier Nov 12 20:54:58.285471 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:54:58.308921 ldconfig[1318]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Nov 12 20:54:58.320226 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Nov 12 20:54:58.320319 systemd-networkd[1569]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 12 20:54:58.414131 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. 
Nov 12 20:54:58.427225 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1566) Nov 12 20:54:58.431464 systemd[1]: Starting systemd-update-done.service - Update is Completed... Nov 12 20:54:58.457285 systemd[1]: Finished systemd-update-done.service - Update is Completed. Nov 12 20:54:58.569458 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:58.602809 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Nov 12 20:54:58.614700 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Nov 12 20:54:58.623007 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:54:58.623631 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:58.640483 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Nov 12 20:54:58.646756 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Nov 12 20:54:58.674355 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Nov 12 20:54:58.704622 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Nov 12 20:54:58.712474 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Nov 12 20:54:58.742495 lvm[1656]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Nov 12 20:54:58.767990 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Nov 12 20:54:58.772067 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Nov 12 20:54:58.780432 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Nov 12 20:54:58.787693 lvm[1659]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
Nov 12 20:54:58.829429 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:54:58.833117 systemd[1]: Reached target sysinit.target - System Initialization. Nov 12 20:54:58.836127 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Nov 12 20:54:58.839681 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Nov 12 20:54:58.843451 systemd[1]: Started logrotate.timer - Daily rotation of log files. Nov 12 20:54:58.846629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Nov 12 20:54:58.850046 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Nov 12 20:54:58.853697 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Nov 12 20:54:58.853740 systemd[1]: Reached target paths.target - Path Units. Nov 12 20:54:58.856064 systemd[1]: Reached target timers.target - Timer Units. Nov 12 20:54:58.859389 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Nov 12 20:54:58.863685 systemd[1]: Starting docker.socket - Docker Socket for the API... Nov 12 20:54:58.878107 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Nov 12 20:54:58.881868 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Nov 12 20:54:58.885402 systemd[1]: Listening on docker.socket - Docker Socket for the API. Nov 12 20:54:58.888860 systemd[1]: Reached target sockets.target - Socket Units. Nov 12 20:54:58.891629 systemd[1]: Reached target basic.target - Basic System. Nov 12 20:54:58.894219 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Nov 12 20:54:58.894256 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Nov 12 20:54:58.901283 systemd[1]: Starting chronyd.service - NTP client/server... Nov 12 20:54:58.907305 systemd[1]: Starting containerd.service - containerd container runtime... Nov 12 20:54:58.925400 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Nov 12 20:54:58.931424 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Nov 12 20:54:58.936311 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Nov 12 20:54:58.948383 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Nov 12 20:54:58.951230 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Nov 12 20:54:58.951297 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). Nov 12 20:54:58.955402 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. Nov 12 20:54:58.960422 (chronyd)[1665]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Nov 12 20:54:58.961798 jq[1671]: false Nov 12 20:54:58.963456 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). Nov 12 20:54:58.972456 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Nov 12 20:54:58.974262 chronyd[1677]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Nov 12 20:54:58.984358 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
Nov 12 20:54:58.985769 extend-filesystems[1672]: Found loop4 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found loop5 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found loop6 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found loop7 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda1 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda2 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda3 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found usr Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda4 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda6 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda7 Nov 12 20:54:58.991776 extend-filesystems[1672]: Found sda9 Nov 12 20:54:58.991776 extend-filesystems[1672]: Checking size of /dev/sda9 Nov 12 20:54:59.020898 kernel: hv_utils: KVP IC version 4.0 Nov 12 20:54:58.990828 KVP[1673]: KVP starting; pid is:1673 Nov 12 20:54:59.007710 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Nov 12 20:54:58.996835 chronyd[1677]: Timezone right/UTC failed leap second check, ignoring Nov 12 20:54:59.020803 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Nov 12 20:54:58.997014 chronyd[1677]: Loaded seccomp filter (level 2) Nov 12 20:54:59.005749 KVP[1673]: KVP LIC Version: 3.1 Nov 12 20:54:59.034181 systemd[1]: Starting systemd-logind.service - User Login Management... Nov 12 20:54:59.038116 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Nov 12 20:54:59.038837 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Nov 12 20:54:59.040629 systemd[1]: Starting update-engine.service - Update Engine... 
Nov 12 20:54:59.043951 extend-filesystems[1672]: Old size kept for /dev/sda9 Nov 12 20:54:59.047529 extend-filesystems[1672]: Found sr0 Nov 12 20:54:59.045568 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Nov 12 20:54:59.052199 systemd[1]: Started chronyd.service - NTP client/server. Nov 12 20:54:59.063652 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Nov 12 20:54:59.063872 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Nov 12 20:54:59.064229 systemd[1]: extend-filesystems.service: Deactivated successfully. Nov 12 20:54:59.064417 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Nov 12 20:54:59.067947 systemd[1]: motdgen.service: Deactivated successfully. Nov 12 20:54:59.068158 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Nov 12 20:54:59.072656 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Nov 12 20:54:59.072869 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Nov 12 20:54:59.085549 dbus-daemon[1668]: [system] SELinux support is enabled Nov 12 20:54:59.089063 systemd[1]: Started dbus.service - D-Bus System Message Bus. Nov 12 20:54:59.105505 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Nov 12 20:54:59.105569 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Nov 12 20:54:59.111616 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Nov 12 20:54:59.111645 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Nov 12 20:54:59.121132 jq[1693]: true Nov 12 20:54:59.147287 (ntainerd)[1712]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Nov 12 20:54:59.167121 tar[1697]: linux-amd64/helm Nov 12 20:54:59.173212 jq[1713]: true Nov 12 20:54:59.190221 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1578) Nov 12 20:54:59.208351 update_engine[1692]: I20241112 20:54:59.207154 1692 main.cc:92] Flatcar Update Engine starting Nov 12 20:54:59.225311 update_engine[1692]: I20241112 20:54:59.225040 1692 update_check_scheduler.cc:74] Next update check in 2m2s Nov 12 20:54:59.228728 systemd[1]: Started update-engine.service - Update Engine. Nov 12 20:54:59.264377 systemd[1]: Started locksmithd.service - Cluster reboot manager. Nov 12 20:54:59.282086 systemd-logind[1689]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Nov 12 20:54:59.283311 coreos-metadata[1667]: Nov 12 20:54:59.282 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Nov 12 20:54:59.287181 coreos-metadata[1667]: Nov 12 20:54:59.287 INFO Fetch successful Nov 12 20:54:59.287280 coreos-metadata[1667]: Nov 12 20:54:59.287 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Nov 12 20:54:59.287537 systemd-logind[1689]: New seat seat0. Nov 12 20:54:59.288412 systemd[1]: Started systemd-logind.service - User Login Management. 
Nov 12 20:54:59.295670 coreos-metadata[1667]: Nov 12 20:54:59.295 INFO Fetch successful Nov 12 20:54:59.297677 coreos-metadata[1667]: Nov 12 20:54:59.297 INFO Fetching http://168.63.129.16/machine/62cc17e8-0e71-44c5-b591-f65c8ab49497/3be82fb0%2D1d80%2D4592%2D9c48%2D382db569e78e.%5Fci%2D4081.2.0%2Da%2Dd8aa37ea01?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Nov 12 20:54:59.303866 coreos-metadata[1667]: Nov 12 20:54:59.300 INFO Fetch successful Nov 12 20:54:59.303866 coreos-metadata[1667]: Nov 12 20:54:59.300 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Nov 12 20:54:59.311245 coreos-metadata[1667]: Nov 12 20:54:59.311 INFO Fetch successful Nov 12 20:54:59.319468 bash[1756]: Updated "/home/core/.ssh/authorized_keys" Nov 12 20:54:59.339544 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Nov 12 20:54:59.350351 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Nov 12 20:54:59.417249 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Nov 12 20:54:59.421899 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Nov 12 20:54:59.551172 locksmithd[1755]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Nov 12 20:54:59.752050 sshd_keygen[1714]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Nov 12 20:54:59.805612 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Nov 12 20:54:59.818457 systemd[1]: Starting issuegen.service - Generate /run/issue... Nov 12 20:54:59.836760 systemd[1]: issuegen.service: Deactivated successfully. Nov 12 20:54:59.838318 systemd[1]: Finished issuegen.service - Generate /run/issue. Nov 12 20:54:59.854320 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Nov 12 20:54:59.868535 containerd[1712]: time="2024-11-12T20:54:59.868450400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Nov 12 20:54:59.886545 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Nov 12 20:54:59.896525 systemd[1]: Started getty@tty1.service - Getty on tty1. Nov 12 20:54:59.905531 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Nov 12 20:54:59.910665 systemd[1]: Reached target getty.target - Login Prompts. Nov 12 20:54:59.938265 containerd[1712]: time="2024-11-12T20:54:59.938181600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940172 containerd[1712]: time="2024-11-12T20:54:59.940118100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940172 containerd[1712]: time="2024-11-12T20:54:59.940170600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Nov 12 20:54:59.940324 containerd[1712]: time="2024-11-12T20:54:59.940205600Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Nov 12 20:54:59.940416 containerd[1712]: time="2024-11-12T20:54:59.940390300Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Nov 12 20:54:59.940461 containerd[1712]: time="2024-11-12T20:54:59.940423500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940529 containerd[1712]: time="2024-11-12T20:54:59.940508000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940567 containerd[1712]: time="2024-11-12T20:54:59.940534000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940792 containerd[1712]: time="2024-11-12T20:54:59.940762800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940852 containerd[1712]: time="2024-11-12T20:54:59.940792800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940852 containerd[1712]: time="2024-11-12T20:54:59.940813500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940852 containerd[1712]: time="2024-11-12T20:54:59.940827700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.940960 containerd[1712]: time="2024-11-12T20:54:59.940932400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.941233 containerd[1712]: time="2024-11-12T20:54:59.941179600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Nov 12 20:54:59.941409 containerd[1712]: time="2024-11-12T20:54:59.941383200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Nov 12 20:54:59.941462 containerd[1712]: time="2024-11-12T20:54:59.941411600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Nov 12 20:54:59.941537 containerd[1712]: time="2024-11-12T20:54:59.941517700Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Nov 12 20:54:59.941599 containerd[1712]: time="2024-11-12T20:54:59.941582000Z" level=info msg="metadata content store policy set" policy=shared Nov 12 20:54:59.953687 containerd[1712]: time="2024-11-12T20:54:59.953651200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Nov 12 20:54:59.953771 containerd[1712]: time="2024-11-12T20:54:59.953720500Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Nov 12 20:54:59.953771 containerd[1712]: time="2024-11-12T20:54:59.953744900Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Nov 12 20:54:59.953862 containerd[1712]: time="2024-11-12T20:54:59.953810100Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Nov 12 20:54:59.953862 containerd[1712]: time="2024-11-12T20:54:59.953842300Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Nov 12 20:54:59.954018 containerd[1712]: time="2024-11-12T20:54:59.953995900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Nov 12 20:54:59.954530 containerd[1712]: time="2024-11-12T20:54:59.954500000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Nov 12 20:54:59.954669 containerd[1712]: time="2024-11-12T20:54:59.954646400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Nov 12 20:54:59.954726 containerd[1712]: time="2024-11-12T20:54:59.954675100Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Nov 12 20:54:59.954726 containerd[1712]: time="2024-11-12T20:54:59.954693400Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Nov 12 20:54:59.954726 containerd[1712]: time="2024-11-12T20:54:59.954715300Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954733700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954752100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954772100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954791600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954810000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.954837 containerd[1712]: time="2024-11-12T20:54:59.954829300Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954846000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954889300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954911000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954928900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954949100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954966300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.954993400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.955012200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955034 containerd[1712]: time="2024-11-12T20:54:59.955031100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955049100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955069100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955085500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955107800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955126200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955148100Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955176000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955268800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955351 containerd[1712]: time="2024-11-12T20:54:59.955287500Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955365500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955396800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955475800Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955494500Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955509400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955526200Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955539500Z" level=info msg="NRI interface is disabled by configuration." Nov 12 20:54:59.955637 containerd[1712]: time="2024-11-12T20:54:59.955554300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Nov 12 20:54:59.957219 containerd[1712]: time="2024-11-12T20:54:59.955935300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Nov 12 20:54:59.957219 containerd[1712]: time="2024-11-12T20:54:59.956033700Z" level=info msg="Connect containerd service" Nov 12 20:54:59.957219 containerd[1712]: time="2024-11-12T20:54:59.956078300Z" level=info msg="using legacy CRI server" Nov 12 20:54:59.957219 containerd[1712]: time="2024-11-12T20:54:59.956087400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Nov 12 20:54:59.957219 containerd[1712]: 
time="2024-11-12T20:54:59.956285800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Nov 12 20:54:59.957219 containerd[1712]: time="2024-11-12T20:54:59.957153900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:54:59.957857 containerd[1712]: time="2024-11-12T20:54:59.957363600Z" level=info msg="Start subscribing containerd event" Nov 12 20:54:59.958577 containerd[1712]: time="2024-11-12T20:54:59.958548600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Nov 12 20:54:59.958651 containerd[1712]: time="2024-11-12T20:54:59.958627900Z" level=info msg=serving... address=/run/containerd/containerd.sock Nov 12 20:54:59.958702 containerd[1712]: time="2024-11-12T20:54:59.958676900Z" level=info msg="Start recovering state" Nov 12 20:54:59.959674 containerd[1712]: time="2024-11-12T20:54:59.958777100Z" level=info msg="Start event monitor" Nov 12 20:54:59.959674 containerd[1712]: time="2024-11-12T20:54:59.958793900Z" level=info msg="Start snapshots syncer" Nov 12 20:54:59.960416 containerd[1712]: time="2024-11-12T20:54:59.958809000Z" level=info msg="Start cni network conf syncer for default" Nov 12 20:54:59.960416 containerd[1712]: time="2024-11-12T20:54:59.959825200Z" level=info msg="Start streaming server" Nov 12 20:54:59.960416 containerd[1712]: time="2024-11-12T20:54:59.960271700Z" level=info msg="containerd successfully booted in 0.093752s" Nov 12 20:54:59.960366 systemd[1]: Started containerd.service - containerd container runtime. Nov 12 20:54:59.972594 tar[1697]: linux-amd64/LICENSE Nov 12 20:54:59.972699 tar[1697]: linux-amd64/README.md Nov 12 20:54:59.983366 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Nov 12 20:55:00.243461 systemd-networkd[1569]: eth0: Gained IPv6LL Nov 12 20:55:00.244991 systemd-networkd[1569]: enP58578s1: Gained IPv6LL Nov 12 20:55:00.247574 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Nov 12 20:55:00.251904 systemd[1]: Reached target network-online.target - Network is Online. Nov 12 20:55:00.259434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:00.263977 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Nov 12 20:55:00.283403 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Nov 12 20:55:00.318087 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Nov 12 20:55:00.361835 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Nov 12 20:55:01.300222 waagent[1807]: 2024-11-12T20:55:01.299915Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Nov 12 20:55:01.306007 waagent[1807]: 2024-11-12T20:55:01.303471Z INFO Daemon Daemon OS: flatcar 4081.2.0 Nov 12 20:55:01.306114 waagent[1807]: 2024-11-12T20:55:01.305997Z INFO Daemon Daemon Python: 3.11.9 Nov 12 20:55:01.308421 waagent[1807]: 2024-11-12T20:55:01.308364Z INFO Daemon Daemon Run daemon Nov 12 20:55:01.311211 waagent[1807]: 2024-11-12T20:55:01.310416Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.0' Nov 12 20:55:01.314983 waagent[1807]: 2024-11-12T20:55:01.314786Z INFO Daemon Daemon Using waagent for provisioning Nov 12 20:55:01.317917 waagent[1807]: 2024-11-12T20:55:01.317863Z INFO Daemon Daemon Activate resource disk Nov 12 20:55:01.320310 waagent[1807]: 2024-11-12T20:55:01.320259Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Nov 12 20:55:01.329216 waagent[1807]: 2024-11-12T20:55:01.328540Z INFO Daemon Daemon Found device: None Nov 12 20:55:01.331390 waagent[1807]: 2024-11-12T20:55:01.331163Z ERROR Daemon Daemon Failed to 
mount resource disk [ResourceDiskError] unable to detect disk topology Nov 12 20:55:01.335368 waagent[1807]: 2024-11-12T20:55:01.335137Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Nov 12 20:55:01.342199 waagent[1807]: 2024-11-12T20:55:01.341767Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 12 20:55:01.345023 waagent[1807]: 2024-11-12T20:55:01.344787Z INFO Daemon Daemon Running default provisioning handler Nov 12 20:55:01.354837 waagent[1807]: 2024-11-12T20:55:01.354635Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Nov 12 20:55:01.361866 waagent[1807]: 2024-11-12T20:55:01.361815Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Nov 12 20:55:01.367010 waagent[1807]: 2024-11-12T20:55:01.366631Z INFO Daemon Daemon cloud-init is enabled: False Nov 12 20:55:01.369759 waagent[1807]: 2024-11-12T20:55:01.369221Z INFO Daemon Daemon Copying ovf-env.xml Nov 12 20:55:01.418205 waagent[1807]: 2024-11-12T20:55:01.416449Z INFO Daemon Daemon Successfully mounted dvd Nov 12 20:55:01.427392 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:01.432381 systemd[1]: Reached target multi-user.target - Multi-User System. Nov 12 20:55:01.433774 (kubelet)[1824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:01.436250 systemd[1]: Startup finished in 465ms (firmware) + 7.889s (loader) + 1.019s (kernel) + 7.615s (initrd) + 7.137s (userspace) = 24.127s. Nov 12 20:55:01.505076 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. 
Nov 12 20:55:01.525253 waagent[1807]: 2024-11-12T20:55:01.524989Z INFO Daemon Daemon Detect protocol endpoint Nov 12 20:55:01.528128 waagent[1807]: 2024-11-12T20:55:01.527922Z INFO Daemon Daemon Clean protocol and wireserver endpoint Nov 12 20:55:01.530953 waagent[1807]: 2024-11-12T20:55:01.530878Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Nov 12 20:55:01.536531 waagent[1807]: 2024-11-12T20:55:01.534069Z INFO Daemon Daemon Test for route to 168.63.129.16 Nov 12 20:55:01.540004 waagent[1807]: 2024-11-12T20:55:01.539459Z INFO Daemon Daemon Route to 168.63.129.16 exists Nov 12 20:55:01.542963 waagent[1807]: 2024-11-12T20:55:01.541956Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Nov 12 20:55:01.563868 waagent[1807]: 2024-11-12T20:55:01.563456Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Nov 12 20:55:01.567820 waagent[1807]: 2024-11-12T20:55:01.567628Z INFO Daemon Daemon Wire protocol version:2012-11-30 Nov 12 20:55:01.570283 waagent[1807]: 2024-11-12T20:55:01.570163Z INFO Daemon Daemon Server preferred version:2015-04-05 Nov 12 20:55:01.684841 login[1789]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:55:01.689990 login[1790]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Nov 12 20:55:01.702886 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Nov 12 20:55:01.709492 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Nov 12 20:55:01.712103 waagent[1807]: 2024-11-12T20:55:01.710812Z INFO Daemon Daemon Initializing goal state during protocol detection Nov 12 20:55:01.715168 waagent[1807]: 2024-11-12T20:55:01.714590Z INFO Daemon Daemon Forcing an update of the goal state. Nov 12 20:55:01.715587 systemd-logind[1689]: New session 1 of user core. 
Nov 12 20:55:01.721216 waagent[1807]: 2024-11-12T20:55:01.719722Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 12 20:55:01.727325 systemd-logind[1689]: New session 2 of user core. Nov 12 20:55:01.737737 waagent[1807]: 2024-11-12T20:55:01.736160Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159 Nov 12 20:55:01.737737 waagent[1807]: 2024-11-12T20:55:01.736891Z INFO Daemon Nov 12 20:55:01.738643 waagent[1807]: 2024-11-12T20:55:01.738594Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 31aeef98-75db-484f-9d2e-62159b2ab8b6 eTag: 15090986421920346278 source: Fabric] Nov 12 20:55:01.739731 waagent[1807]: 2024-11-12T20:55:01.739691Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Nov 12 20:55:01.741281 waagent[1807]: 2024-11-12T20:55:01.741239Z INFO Daemon Nov 12 20:55:01.742015 waagent[1807]: 2024-11-12T20:55:01.741980Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Nov 12 20:55:01.751595 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Nov 12 20:55:01.754382 waagent[1807]: 2024-11-12T20:55:01.754313Z INFO Daemon Daemon Downloading artifacts profile blob Nov 12 20:55:01.760884 systemd[1]: Starting user@500.service - User Manager for UID 500... 
Nov 12 20:55:01.765890 (systemd)[1835]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Nov 12 20:55:01.846674 waagent[1807]: 2024-11-12T20:55:01.845910Z INFO Daemon Downloaded certificate {'thumbprint': 'B4694553D9EA0A5CBF89760FB26C317BD15E4C35', 'hasPrivateKey': True} Nov 12 20:55:01.847119 waagent[1807]: 2024-11-12T20:55:01.847052Z INFO Daemon Downloaded certificate {'thumbprint': '95BF945507D940E97582887B13A43A9BE5C8C65C', 'hasPrivateKey': False} Nov 12 20:55:01.847727 waagent[1807]: 2024-11-12T20:55:01.847679Z INFO Daemon Fetch goal state completed Nov 12 20:55:01.860211 waagent[1807]: 2024-11-12T20:55:01.858472Z INFO Daemon Daemon Starting provisioning Nov 12 20:55:01.863362 waagent[1807]: 2024-11-12T20:55:01.863202Z INFO Daemon Daemon Handle ovf-env.xml. Nov 12 20:55:01.866969 waagent[1807]: 2024-11-12T20:55:01.866901Z INFO Daemon Daemon Set hostname [ci-4081.2.0-a-d8aa37ea01] Nov 12 20:55:01.879977 waagent[1807]: 2024-11-12T20:55:01.877714Z INFO Daemon Daemon Publish hostname [ci-4081.2.0-a-d8aa37ea01] Nov 12 20:55:01.883220 waagent[1807]: 2024-11-12T20:55:01.881581Z INFO Daemon Daemon Examine /proc/net/route for primary interface Nov 12 20:55:01.887233 waagent[1807]: 2024-11-12T20:55:01.884920Z INFO Daemon Daemon Primary interface is [eth0] Nov 12 20:55:01.912143 systemd-networkd[1569]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Nov 12 20:55:01.913786 systemd-networkd[1569]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Nov 12 20:55:01.913924 systemd-networkd[1569]: eth0: DHCP lease lost Nov 12 20:55:01.915588 waagent[1807]: 2024-11-12T20:55:01.915496Z INFO Daemon Daemon Create user account if not exists Nov 12 20:55:01.920440 waagent[1807]: 2024-11-12T20:55:01.919932Z INFO Daemon Daemon User core already exists, skip useradd Nov 12 20:55:01.923114 systemd-networkd[1569]: eth0: DHCPv6 lease lost Nov 12 20:55:01.925106 waagent[1807]: 2024-11-12T20:55:01.925044Z INFO Daemon Daemon Configure sudoer Nov 12 20:55:01.931490 waagent[1807]: 2024-11-12T20:55:01.931433Z INFO Daemon Daemon Configure sshd Nov 12 20:55:01.941039 waagent[1807]: 2024-11-12T20:55:01.934676Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Nov 12 20:55:01.941375 waagent[1807]: 2024-11-12T20:55:01.941323Z INFO Daemon Daemon Deploy ssh public key. Nov 12 20:55:01.965253 systemd-networkd[1569]: eth0: DHCPv4 address 10.200.8.15/24, gateway 10.200.8.1 acquired from 168.63.129.16 Nov 12 20:55:01.978565 systemd[1835]: Queued start job for default target default.target. Nov 12 20:55:01.985321 systemd[1835]: Created slice app.slice - User Application Slice. Nov 12 20:55:01.985361 systemd[1835]: Reached target paths.target - Paths. Nov 12 20:55:01.985381 systemd[1835]: Reached target timers.target - Timers. Nov 12 20:55:01.987795 systemd[1835]: Starting dbus.socket - D-Bus User Message Bus Socket... Nov 12 20:55:02.007986 systemd[1835]: Listening on dbus.socket - D-Bus User Message Bus Socket. Nov 12 20:55:02.008318 systemd[1835]: Reached target sockets.target - Sockets. Nov 12 20:55:02.008447 systemd[1835]: Reached target basic.target - Basic System. Nov 12 20:55:02.008602 systemd[1835]: Reached target default.target - Main User Target. Nov 12 20:55:02.008725 systemd[1835]: Startup finished in 235ms. Nov 12 20:55:02.008898 systemd[1]: Started user@500.service - User Manager for UID 500. 
Nov 12 20:55:02.015891 systemd[1]: Started session-1.scope - Session 1 of User core. Nov 12 20:55:02.017181 systemd[1]: Started session-2.scope - Session 2 of User core. Nov 12 20:55:02.466576 kubelet[1824]: E1112 20:55:02.466492 1824 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:02.469357 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:02.469545 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:03.065818 waagent[1807]: 2024-11-12T20:55:03.065700Z INFO Daemon Daemon Provisioning complete Nov 12 20:55:03.077288 waagent[1807]: 2024-11-12T20:55:03.077229Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Nov 12 20:55:03.077999 waagent[1807]: 2024-11-12T20:55:03.077520Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Nov 12 20:55:03.079904 waagent[1807]: 2024-11-12T20:55:03.078031Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Nov 12 20:55:03.201120 waagent[1888]: 2024-11-12T20:55:03.201019Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Nov 12 20:55:03.201528 waagent[1888]: 2024-11-12T20:55:03.201183Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.0 Nov 12 20:55:03.201528 waagent[1888]: 2024-11-12T20:55:03.201292Z INFO ExtHandler ExtHandler Python: 3.11.9 Nov 12 20:55:03.215582 waagent[1888]: 2024-11-12T20:55:03.215513Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Nov 12 20:55:03.215769 waagent[1888]: 2024-11-12T20:55:03.215724Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:55:03.215854 waagent[1888]: 2024-11-12T20:55:03.215814Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:55:03.222674 waagent[1888]: 2024-11-12T20:55:03.222610Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Nov 12 20:55:03.227516 waagent[1888]: 2024-11-12T20:55:03.227464Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159 Nov 12 20:55:03.227956 waagent[1888]: 2024-11-12T20:55:03.227904Z INFO ExtHandler Nov 12 20:55:03.228032 waagent[1888]: 2024-11-12T20:55:03.227994Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 402f92e9-0fb4-41f7-939e-1ec8b79adb40 eTag: 15090986421920346278 source: Fabric] Nov 12 20:55:03.228377 waagent[1888]: 2024-11-12T20:55:03.228324Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Nov 12 20:55:03.228911 waagent[1888]: 2024-11-12T20:55:03.228856Z INFO ExtHandler Nov 12 20:55:03.228973 waagent[1888]: 2024-11-12T20:55:03.228936Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Nov 12 20:55:03.232399 waagent[1888]: 2024-11-12T20:55:03.232361Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Nov 12 20:55:03.294001 waagent[1888]: 2024-11-12T20:55:03.293916Z INFO ExtHandler Downloaded certificate {'thumbprint': 'B4694553D9EA0A5CBF89760FB26C317BD15E4C35', 'hasPrivateKey': True} Nov 12 20:55:03.294424 waagent[1888]: 2024-11-12T20:55:03.294367Z INFO ExtHandler Downloaded certificate {'thumbprint': '95BF945507D940E97582887B13A43A9BE5C8C65C', 'hasPrivateKey': False} Nov 12 20:55:03.294839 waagent[1888]: 2024-11-12T20:55:03.294789Z INFO ExtHandler Fetch goal state completed Nov 12 20:55:03.309618 waagent[1888]: 2024-11-12T20:55:03.309559Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1888 Nov 12 20:55:03.309765 waagent[1888]: 2024-11-12T20:55:03.309718Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Nov 12 20:55:03.311355 waagent[1888]: 2024-11-12T20:55:03.311300Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.0', '', 'Flatcar Container Linux by Kinvolk'] Nov 12 20:55:03.311729 waagent[1888]: 2024-11-12T20:55:03.311677Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Nov 12 20:55:03.329749 waagent[1888]: 2024-11-12T20:55:03.329655Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Nov 12 20:55:03.329921 waagent[1888]: 2024-11-12T20:55:03.329869Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 12 20:55:03.337468 waagent[1888]: 2024-11-12T20:55:03.337398Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not 
enabled. Adding it now Nov 12 20:55:03.344071 systemd[1]: Reloading requested from client PID 1903 ('systemctl') (unit waagent.service)... Nov 12 20:55:03.344088 systemd[1]: Reloading... Nov 12 20:55:03.438279 zram_generator::config[1937]: No configuration found. Nov 12 20:55:03.551029 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:03.630167 systemd[1]: Reloading finished in 285 ms. Nov 12 20:55:03.656958 waagent[1888]: 2024-11-12T20:55:03.656850Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 12 20:55:03.665281 systemd[1]: Reloading requested from client PID 1994 ('systemctl') (unit waagent.service)... Nov 12 20:55:03.665295 systemd[1]: Reloading... Nov 12 20:55:03.737212 zram_generator::config[2024]: No configuration found. Nov 12 20:55:03.868491 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:55:03.947561 systemd[1]: Reloading finished in 281 ms. Nov 12 20:55:03.975499 waagent[1888]: 2024-11-12T20:55:03.975025Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 12 20:55:03.975499 waagent[1888]: 2024-11-12T20:55:03.975249Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 12 20:55:04.148525 waagent[1888]: 2024-11-12T20:55:04.148439Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 12 20:55:04.149072 waagent[1888]: 2024-11-12T20:55:04.149011Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. 
All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 12 20:55:04.149826 waagent[1888]: 2024-11-12T20:55:04.149767Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 12 20:55:04.149949 waagent[1888]: 2024-11-12T20:55:04.149905Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:55:04.150289 waagent[1888]: 2024-11-12T20:55:04.150235Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:55:04.150484 waagent[1888]: 2024-11-12T20:55:04.150440Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 12 20:55:04.150991 waagent[1888]: 2024-11-12T20:55:04.150900Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 12 20:55:04.151116 waagent[1888]: 2024-11-12T20:55:04.151079Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Nov 12 20:55:04.151372 waagent[1888]: 2024-11-12T20:55:04.151327Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 12 20:55:04.151959 waagent[1888]: 2024-11-12T20:55:04.151910Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 12 20:55:04.152162 waagent[1888]: 2024-11-12T20:55:04.152079Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:55:04.152242 waagent[1888]: 2024-11-12T20:55:04.152182Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Nov 12 20:55:04.152536 waagent[1888]: 2024-11-12T20:55:04.152495Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 12 20:55:04.152889 waagent[1888]: 2024-11-12T20:55:04.152823Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:55:04.153361 waagent[1888]: 2024-11-12T20:55:04.153224Z INFO EnvHandler ExtHandler Configure routes Nov 12 20:55:04.153361 waagent[1888]: 2024-11-12T20:55:04.153277Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 12 20:55:04.153361 waagent[1888]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 12 20:55:04.153361 waagent[1888]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 12 20:55:04.153361 waagent[1888]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 12 20:55:04.153361 waagent[1888]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:55:04.153361 waagent[1888]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:55:04.153361 waagent[1888]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:55:04.154249 waagent[1888]: 2024-11-12T20:55:04.154205Z INFO EnvHandler ExtHandler Gateway:None Nov 12 20:55:04.155334 waagent[1888]: 2024-11-12T20:55:04.155281Z INFO EnvHandler ExtHandler Routes:None Nov 12 20:55:04.159872 waagent[1888]: 2024-11-12T20:55:04.159828Z INFO ExtHandler ExtHandler Nov 12 20:55:04.159974 waagent[1888]: 2024-11-12T20:55:04.159930Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 91ce7ee5-244b-41ae-941e-7038be974e5d correlation 148057c8-e6b6-4340-b94e-24d4c3b739d4 created: 2024-11-12T20:54:27.696976Z] Nov 12 20:55:04.161018 waagent[1888]: 2024-11-12T20:55:04.160976Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Nov 12 20:55:04.163946 waagent[1888]: 2024-11-12T20:55:04.163900Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 4 ms] Nov 12 20:55:04.176604 waagent[1888]: 2024-11-12T20:55:04.176111Z INFO MonitorHandler ExtHandler Network interfaces: Nov 12 20:55:04.176604 waagent[1888]: Executing ['ip', '-a', '-o', 'link']: Nov 12 20:55:04.176604 waagent[1888]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 12 20:55:04.176604 waagent[1888]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b5:83:cd brd ff:ff:ff:ff:ff:ff Nov 12 20:55:04.176604 waagent[1888]: 3: enP58578s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b5:83:cd brd ff:ff:ff:ff:ff:ff\ altname enP58578p0s2 Nov 12 20:55:04.176604 waagent[1888]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 12 20:55:04.176604 waagent[1888]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 12 20:55:04.176604 waagent[1888]: 2: eth0 inet 10.200.8.15/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 12 20:55:04.176604 waagent[1888]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 12 20:55:04.176604 waagent[1888]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 12 20:55:04.176604 waagent[1888]: 2: eth0 inet6 fe80::20d:3aff:feb5:83cd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:55:04.176604 waagent[1888]: 3: enP58578s1 inet6 fe80::20d:3aff:feb5:83cd/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:55:04.199415 waagent[1888]: 2024-11-12T20:55:04.199325Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 
BB3200C3-4977-421D-81F5-E5E013BC2AB8;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 12 20:55:04.246773 waagent[1888]: 2024-11-12T20:55:04.246711Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Nov 12 20:55:04.246773 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.246773 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.246773 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.246773 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.246773 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.246773 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.246773 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:55:04.246773 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:55:04.246773 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 12 20:55:04.250059 waagent[1888]: 2024-11-12T20:55:04.250003Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 12 20:55:04.250059 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.250059 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.250059 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.250059 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.250059 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:55:04.250059 waagent[1888]: pkts bytes target prot opt in out source destination Nov 12 20:55:04.250059 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:55:04.250059 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:55:04.250059 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 
168.63.129.16 ctstate INVALID,NEW Nov 12 20:55:04.250513 waagent[1888]: 2024-11-12T20:55:04.250330Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 12 20:55:12.720275 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:55:12.725490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:12.820095 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:12.831535 (kubelet)[2124]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:13.432538 kubelet[2124]: E1112 20:55:13.432457 2124 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:13.436556 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:13.436767 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:22.806566 chronyd[1677]: Selected source PHC0 Nov 12 20:55:23.687359 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Nov 12 20:55:23.693418 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:23.785718 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:23.790459 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:24.465934 kubelet[2140]: E1112 20:55:24.465871 2140 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:24.468530 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:24.468734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:34.719223 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:55:34.725477 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:34.819106 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:55:34.823689 (kubelet)[2157]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:35.244504 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:55:35.250485 systemd[1]: Started sshd@0-10.200.8.15:22-10.200.16.10:45276.service - OpenSSH per-connection server daemon (10.200.16.10:45276). 
Nov 12 20:55:35.370546 kubelet[2157]: E1112 20:55:35.370486 2157 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:35.373115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:35.373322 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:35.977368 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 45276 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:35.979144 sshd[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:35.984682 systemd-logind[1689]: New session 3 of user core. Nov 12 20:55:35.993343 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:55:36.529323 systemd[1]: Started sshd@1-10.200.8.15:22-10.200.16.10:45278.service - OpenSSH per-connection server daemon (10.200.16.10:45278). Nov 12 20:55:37.154038 sshd[2171]: Accepted publickey for core from 10.200.16.10 port 45278 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:37.155823 sshd[2171]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:37.160873 systemd-logind[1689]: New session 4 of user core. Nov 12 20:55:37.167349 systemd[1]: Started session-4.scope - Session 4 of User core. Nov 12 20:55:37.603090 sshd[2171]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:37.607387 systemd[1]: sshd@1-10.200.8.15:22-10.200.16.10:45278.service: Deactivated successfully. Nov 12 20:55:37.609140 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:55:37.609873 systemd-logind[1689]: Session 4 logged out. Waiting for processes to exit. 
Nov 12 20:55:37.610836 systemd-logind[1689]: Removed session 4. Nov 12 20:55:37.717432 systemd[1]: Started sshd@2-10.200.8.15:22-10.200.16.10:45280.service - OpenSSH per-connection server daemon (10.200.16.10:45280). Nov 12 20:55:38.342682 sshd[2178]: Accepted publickey for core from 10.200.16.10 port 45280 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:38.344223 sshd[2178]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:38.348739 systemd-logind[1689]: New session 5 of user core. Nov 12 20:55:38.355354 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:55:38.784044 sshd[2178]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:38.789177 systemd[1]: sshd@2-10.200.8.15:22-10.200.16.10:45280.service: Deactivated successfully. Nov 12 20:55:38.791387 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:55:38.792244 systemd-logind[1689]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:55:38.793406 systemd-logind[1689]: Removed session 5. Nov 12 20:55:38.895038 systemd[1]: Started sshd@3-10.200.8.15:22-10.200.16.10:32816.service - OpenSSH per-connection server daemon (10.200.16.10:32816). Nov 12 20:55:39.520461 sshd[2185]: Accepted publickey for core from 10.200.16.10 port 32816 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:39.521921 sshd[2185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:39.526387 systemd-logind[1689]: New session 6 of user core. Nov 12 20:55:39.535366 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:55:39.968448 sshd[2185]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:39.971315 systemd[1]: sshd@3-10.200.8.15:22-10.200.16.10:32816.service: Deactivated successfully. Nov 12 20:55:39.973672 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:55:39.975522 systemd-logind[1689]: Session 6 logged out. 
Waiting for processes to exit. Nov 12 20:55:39.976649 systemd-logind[1689]: Removed session 6. Nov 12 20:55:40.085702 systemd[1]: Started sshd@4-10.200.8.15:22-10.200.16.10:32822.service - OpenSSH per-connection server daemon (10.200.16.10:32822). Nov 12 20:55:40.711264 sshd[2192]: Accepted publickey for core from 10.200.16.10 port 32822 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:40.712717 sshd[2192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:40.718030 systemd-logind[1689]: New session 7 of user core. Nov 12 20:55:40.727365 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:55:41.094315 sudo[2195]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:55:41.094815 sudo[2195]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:41.111477 sudo[2195]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:41.214721 sshd[2192]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:41.218530 systemd[1]: sshd@4-10.200.8.15:22-10.200.16.10:32822.service: Deactivated successfully. Nov 12 20:55:41.220568 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:55:41.222117 systemd-logind[1689]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:55:41.223173 systemd-logind[1689]: Removed session 7. Nov 12 20:55:41.325508 systemd[1]: Started sshd@5-10.200.8.15:22-10.200.16.10:32824.service - OpenSSH per-connection server daemon (10.200.16.10:32824). Nov 12 20:55:41.951822 sshd[2200]: Accepted publickey for core from 10.200.16.10 port 32824 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:41.953379 sshd[2200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:41.958633 systemd-logind[1689]: New session 8 of user core. Nov 12 20:55:41.965327 systemd[1]: Started session-8.scope - Session 8 of User core. 
Nov 12 20:55:42.297885 sudo[2204]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:55:42.298338 sudo[2204]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:42.301736 sudo[2204]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:42.306519 sudo[2203]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:55:42.306848 sudo[2203]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:42.318501 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:42.320542 auditctl[2207]: No rules Nov 12 20:55:42.320897 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:55:42.321084 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:42.323695 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:55:42.348685 augenrules[2225]: No rules Nov 12 20:55:42.350066 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:55:42.351264 sudo[2203]: pam_unix(sudo:session): session closed for user root Nov 12 20:55:42.453554 sshd[2200]: pam_unix(sshd:session): session closed for user core Nov 12 20:55:42.458127 systemd[1]: sshd@5-10.200.8.15:22-10.200.16.10:32824.service: Deactivated successfully. Nov 12 20:55:42.460439 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:55:42.461337 systemd-logind[1689]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:55:42.462215 systemd-logind[1689]: Removed session 8. Nov 12 20:55:42.564696 systemd[1]: Started sshd@6-10.200.8.15:22-10.200.16.10:32836.service - OpenSSH per-connection server daemon (10.200.16.10:32836). 
Nov 12 20:55:43.197709 sshd[2233]: Accepted publickey for core from 10.200.16.10 port 32836 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:55:43.199172 sshd[2233]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:55:43.203659 systemd-logind[1689]: New session 9 of user core. Nov 12 20:55:43.213345 systemd[1]: Started session-9.scope - Session 9 of User core. Nov 12 20:55:43.542446 sudo[2236]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:55:43.542798 sudo[2236]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:55:44.666016 update_engine[1692]: I20241112 20:55:44.665915 1692 update_attempter.cc:509] Updating boot flags... Nov 12 20:55:44.763181 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2257) Nov 12 20:55:44.880232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2257) Nov 12 20:55:45.447441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 20:55:45.452448 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:55:46.368068 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 12 20:55:46.555640 (dockerd)[2320]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:55:46.555905 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:55:49.799265 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:55:49.803965 (kubelet)[2326]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:55:49.850327 kubelet[2326]: E1112 20:55:49.850249 2326 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:55:49.852934 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:55:49.853147 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:55:50.640073 dockerd[2320]: time="2024-11-12T20:55:50.640016889Z" level=info msg="Starting up" Nov 12 20:55:51.770244 dockerd[2320]: time="2024-11-12T20:55:51.770169979Z" level=info msg="Loading containers: start." Nov 12 20:55:51.952217 kernel: Initializing XFRM netlink socket Nov 12 20:55:52.019341 systemd-networkd[1569]: docker0: Link UP Nov 12 20:55:52.085644 dockerd[2320]: time="2024-11-12T20:55:52.085345073Z" level=info msg="Loading containers: done." 
Nov 12 20:55:52.624029 dockerd[2320]: time="2024-11-12T20:55:52.623970166Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:55:52.624306 dockerd[2320]: time="2024-11-12T20:55:52.624116568Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:55:52.624372 dockerd[2320]: time="2024-11-12T20:55:52.624316271Z" level=info msg="Daemon has completed initialization" Nov 12 20:55:52.888115 dockerd[2320]: time="2024-11-12T20:55:52.887552475Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:55:52.887816 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:55:55.138634 containerd[1712]: time="2024-11-12T20:55:55.138594102Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\"" Nov 12 20:55:55.766376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount928589529.mount: Deactivated successfully. 
Nov 12 20:55:57.579435 containerd[1712]: time="2024-11-12T20:55:57.579376648Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:57.581499 containerd[1712]: time="2024-11-12T20:55:57.581443074Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.10: active requests=0, bytes read=35140807" Nov 12 20:55:57.585557 containerd[1712]: time="2024-11-12T20:55:57.585502626Z" level=info msg="ImageCreate event name:\"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:57.591537 containerd[1712]: time="2024-11-12T20:55:57.591345100Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:57.593065 containerd[1712]: time="2024-11-12T20:55:57.592788418Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.10\" with image id \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b4362c227fb9a8e1961e17bc5cb55e3fea4414da9936d71663d223d7eda23669\", size \"35137599\" in 2.454154015s" Nov 12 20:55:57.593065 containerd[1712]: time="2024-11-12T20:55:57.592829519Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.10\" returns image reference \"sha256:18c48eab348cb2ea0d360be7cb2530f47a017434fa672c694e839f837137ffe0\"" Nov 12 20:55:57.613556 containerd[1712]: time="2024-11-12T20:55:57.613519581Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\"" Nov 12 20:55:59.498561 containerd[1712]: time="2024-11-12T20:55:59.498504380Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:59.502212 containerd[1712]: time="2024-11-12T20:55:59.502140126Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.10: active requests=0, bytes read=32218307" Nov 12 20:55:59.505976 containerd[1712]: time="2024-11-12T20:55:59.505920574Z" level=info msg="ImageCreate event name:\"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:59.513600 containerd[1712]: time="2024-11-12T20:55:59.513542471Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:59.514773 containerd[1712]: time="2024-11-12T20:55:59.514609285Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.10\" with image id \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:d74524a4d9d071510c5abb6404bf4daf2609510d8d5f0683e1efd83d69176647\", size \"33663665\" in 1.900893601s" Nov 12 20:55:59.514773 containerd[1712]: time="2024-11-12T20:55:59.514650685Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.10\" returns image reference \"sha256:ad191b766a6c87c02578cced8268155fd86b78f8f096775f9d4c3a8f8dccf6bf\"" Nov 12 20:55:59.538852 containerd[1712]: time="2024-11-12T20:55:59.538804091Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\"" Nov 12 20:55:59.947663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Nov 12 20:55:59.955402 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:00.053681 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:56:00.058416 (kubelet)[2552]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:56:00.101534 kubelet[2552]: E1112 20:56:00.101474 2552 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:56:00.104089 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:56:00.104318 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:56:01.319281 containerd[1712]: time="2024-11-12T20:56:01.319221065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:01.321128 containerd[1712]: time="2024-11-12T20:56:01.321054288Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.10: active requests=0, bytes read=17332668" Nov 12 20:56:01.325737 containerd[1712]: time="2024-11-12T20:56:01.325599046Z" level=info msg="ImageCreate event name:\"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:01.332675 containerd[1712]: time="2024-11-12T20:56:01.332614235Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:01.333954 containerd[1712]: time="2024-11-12T20:56:01.333753549Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.10\" with image id \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.10\", 
repo digest \"registry.k8s.io/kube-scheduler@sha256:41f2fb005da3fa5512bfc7f267a6f08aaea27c9f7c6d9a93c7ee28607c1f2f77\", size \"18778044\" in 1.794902557s" Nov 12 20:56:01.333954 containerd[1712]: time="2024-11-12T20:56:01.333796750Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.10\" returns image reference \"sha256:27a6d029a6b019de099d92bd417a4e40c98e146a04faaab836138abf6307034d\"" Nov 12 20:56:01.356084 containerd[1712]: time="2024-11-12T20:56:01.356054432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\"" Nov 12 20:56:02.607694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3683114036.mount: Deactivated successfully. Nov 12 20:56:03.062483 containerd[1712]: time="2024-11-12T20:56:03.062426013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:03.065077 containerd[1712]: time="2024-11-12T20:56:03.065015648Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.10: active requests=0, bytes read=28616824" Nov 12 20:56:03.068609 containerd[1712]: time="2024-11-12T20:56:03.068556796Z" level=info msg="ImageCreate event name:\"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:03.072580 containerd[1712]: time="2024-11-12T20:56:03.072546949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:03.073272 containerd[1712]: time="2024-11-12T20:56:03.073099257Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.10\" with image id \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\", repo tag \"registry.k8s.io/kube-proxy:v1.29.10\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:3c5ceb7942f21793d4cb5880bc0ed7ca7d7f93318fc3f0830816593b86aa19d8\", size \"28615835\" in 1.717008625s" Nov 12 20:56:03.073272 containerd[1712]: time="2024-11-12T20:56:03.073138257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.10\" returns image reference \"sha256:561e7e8f714aae262c52c7ea98efdabecf299956499c8a2c63eab6759906f0a4\"" Nov 12 20:56:03.095205 containerd[1712]: time="2024-11-12T20:56:03.095170653Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:56:03.717966 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount661176914.mount: Deactivated successfully. Nov 12 20:56:04.962304 containerd[1712]: time="2024-11-12T20:56:04.962249633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:04.965143 containerd[1712]: time="2024-11-12T20:56:04.964928769Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Nov 12 20:56:04.968468 containerd[1712]: time="2024-11-12T20:56:04.968066612Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:04.973292 containerd[1712]: time="2024-11-12T20:56:04.973257781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:04.974460 containerd[1712]: time="2024-11-12T20:56:04.974301895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.879081741s" Nov 12 20:56:04.974460 containerd[1712]: time="2024-11-12T20:56:04.974342696Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:56:04.996476 containerd[1712]: time="2024-11-12T20:56:04.996430293Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:56:05.524895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4227196764.mount: Deactivated successfully. Nov 12 20:56:05.548228 containerd[1712]: time="2024-11-12T20:56:05.548161404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:05.550362 containerd[1712]: time="2024-11-12T20:56:05.550310233Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Nov 12 20:56:05.555301 containerd[1712]: time="2024-11-12T20:56:05.555249899Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:05.560541 containerd[1712]: time="2024-11-12T20:56:05.560506070Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:05.561688 containerd[1712]: time="2024-11-12T20:56:05.561542984Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 565.07069ms" Nov 12 
20:56:05.561688 containerd[1712]: time="2024-11-12T20:56:05.561584884Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:56:05.587784 containerd[1712]: time="2024-11-12T20:56:05.587615534Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Nov 12 20:56:06.279489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2628204530.mount: Deactivated successfully. Nov 12 20:56:08.549738 containerd[1712]: time="2024-11-12T20:56:08.549678423Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:08.552143 containerd[1712]: time="2024-11-12T20:56:08.551984654Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Nov 12 20:56:08.555257 containerd[1712]: time="2024-11-12T20:56:08.555180197Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:08.560490 containerd[1712]: time="2024-11-12T20:56:08.560437767Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:56:08.561637 containerd[1712]: time="2024-11-12T20:56:08.561491582Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 2.973836447s" Nov 12 20:56:08.561637 containerd[1712]: time="2024-11-12T20:56:08.561530582Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image 
reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Nov 12 20:56:10.197693 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 12 20:56:10.206296 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:10.490354 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:10.501574 (kubelet)[2748]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:56:10.870282 kubelet[2748]: E1112 20:56:10.869087 2748 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:56:10.871542 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:56:10.871726 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:56:11.654613 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:11.660471 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:11.684097 systemd[1]: Reloading requested from client PID 2764 ('systemctl') (unit session-9.scope)... Nov 12 20:56:11.684114 systemd[1]: Reloading... Nov 12 20:56:11.796220 zram_generator::config[2807]: No configuration found. Nov 12 20:56:11.929428 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:56:12.009020 systemd[1]: Reloading finished in 324 ms. 
Nov 12 20:56:12.090114 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:56:12.090261 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:56:12.090597 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:12.100584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:12.926822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:12.932454 (kubelet)[2871]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:56:12.975392 kubelet[2871]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:12.975392 kubelet[2871]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:56:12.975392 kubelet[2871]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:56:12.975828 kubelet[2871]: I1112 20:56:12.975443 2871 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:56:13.233544 kubelet[2871]: I1112 20:56:13.233508 2871 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:56:13.233544 kubelet[2871]: I1112 20:56:13.233537 2871 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:56:13.233817 kubelet[2871]: I1112 20:56:13.233794 2871 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:56:13.535894 kubelet[2871]: I1112 20:56:13.535758 2871 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:13.536214 kubelet[2871]: E1112 20:56:13.536040 2871 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.15:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.549793 kubelet[2871]: I1112 20:56:13.549756 2871 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:56:13.557990 kubelet[2871]: I1112 20:56:13.550028 2871 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:56:13.557990 kubelet[2871]: I1112 20:56:13.550800 2871 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:56:13.557990 kubelet[2871]: I1112 20:56:13.550859 2871 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:56:13.557990 kubelet[2871]: I1112 20:56:13.550875 2871 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:56:13.558283 kubelet[2871]: I1112 
20:56:13.558144 2871 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:13.558332 kubelet[2871]: I1112 20:56:13.558288 2871 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:56:13.558332 kubelet[2871]: I1112 20:56:13.558309 2871 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:56:13.558407 kubelet[2871]: I1112 20:56:13.558343 2871 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:56:13.558407 kubelet[2871]: I1112 20:56:13.558363 2871 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:56:13.560139 kubelet[2871]: W1112 20:56:13.559586 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.560491 kubelet[2871]: E1112 20:56:13.560291 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.560491 kubelet[2871]: W1112 20:56:13.560391 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-d8aa37ea01&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.560491 kubelet[2871]: E1112 20:56:13.560439 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-d8aa37ea01&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.561110 kubelet[2871]: I1112 20:56:13.560786 2871 
kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:56:13.564319 kubelet[2871]: I1112 20:56:13.564296 2871 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:56:13.564415 kubelet[2871]: W1112 20:56:13.564377 2871 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:56:13.565025 kubelet[2871]: I1112 20:56:13.564993 2871 server.go:1256] "Started kubelet" Nov 12 20:56:13.565160 kubelet[2871]: I1112 20:56:13.565138 2871 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:56:13.566332 kubelet[2871]: I1112 20:56:13.566028 2871 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:56:13.568867 kubelet[2871]: I1112 20:56:13.568649 2871 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:56:13.570076 kubelet[2871]: I1112 20:56:13.570057 2871 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:56:13.570994 kubelet[2871]: I1112 20:56:13.570959 2871 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:56:13.572230 kubelet[2871]: I1112 20:56:13.571414 2871 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:56:13.572462 kubelet[2871]: E1112 20:56:13.572446 2871 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-d8aa37ea01.18075407a5796fa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-d8aa37ea01,UID:ci-4081.2.0-a-d8aa37ea01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting 
kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-d8aa37ea01,},FirstTimestamp:2024-11-12 20:56:13.564948386 +0000 UTC m=+0.628201266,LastTimestamp:2024-11-12 20:56:13.564948386 +0000 UTC m=+0.628201266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-d8aa37ea01,}" Nov 12 20:56:13.573369 kubelet[2871]: I1112 20:56:13.573332 2871 reconciler_new.go:29] "Reconciler: start to sync state" Nov 12 20:56:13.573995 kubelet[2871]: I1112 20:56:13.573730 2871 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:56:13.574197 kubelet[2871]: W1112 20:56:13.574105 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.574263 kubelet[2871]: E1112 20:56:13.574219 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.574521 kubelet[2871]: E1112 20:56:13.574498 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-d8aa37ea01?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="200ms" Nov 12 20:56:13.576670 kubelet[2871]: I1112 20:56:13.575888 2871 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:56:13.576670 kubelet[2871]: I1112 20:56:13.575979 2871 factory.go:219] Registration of the crio container factory failed: Get 
"http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:56:13.578265 kubelet[2871]: I1112 20:56:13.578243 2871 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:56:13.605788 kubelet[2871]: I1112 20:56:13.605765 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:56:13.608220 kubelet[2871]: I1112 20:56:13.608168 2871 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:56:13.608410 kubelet[2871]: I1112 20:56:13.608390 2871 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:56:13.608472 kubelet[2871]: I1112 20:56:13.608423 2871 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:56:13.608511 kubelet[2871]: E1112 20:56:13.608473 2871 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:56:13.613758 kubelet[2871]: I1112 20:56:13.613505 2871 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:56:13.613758 kubelet[2871]: I1112 20:56:13.613522 2871 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:56:13.613758 kubelet[2871]: I1112 20:56:13.613539 2871 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:13.613931 kubelet[2871]: W1112 20:56:13.613838 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.613931 kubelet[2871]: E1112 20:56:13.613889 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:13.617992 kubelet[2871]: I1112 20:56:13.617968 2871 policy_none.go:49] "None policy: Start" Nov 12 20:56:13.618537 kubelet[2871]: I1112 20:56:13.618511 2871 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:56:13.618537 kubelet[2871]: I1112 20:56:13.618537 2871 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:56:13.627974 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:56:13.637869 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:56:13.643915 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Nov 12 20:56:13.652118 kubelet[2871]: I1112 20:56:13.651898 2871 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:56:13.652370 kubelet[2871]: I1112 20:56:13.652352 2871 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:56:13.653658 kubelet[2871]: E1112 20:56:13.653620 2871 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-a-d8aa37ea01\" not found" Nov 12 20:56:13.673435 kubelet[2871]: I1112 20:56:13.673415 2871 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.673829 kubelet[2871]: E1112 20:56:13.673808 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.709177 kubelet[2871]: I1112 20:56:13.709153 2871 topology_manager.go:215] "Topology Admit Handler" podUID="05f281f3f4f0545c56ad1b1b5d9fda48" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.710753 kubelet[2871]: I1112 
20:56:13.710653 2871 topology_manager.go:215] "Topology Admit Handler" podUID="acdf0f90a675c8a64be57c5e58a8f86c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.712376 kubelet[2871]: I1112 20:56:13.712258 2871 topology_manager.go:215] "Topology Admit Handler" podUID="59177b775832564b4d81ef77bebda8ff" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.718809 systemd[1]: Created slice kubepods-burstable-pod05f281f3f4f0545c56ad1b1b5d9fda48.slice - libcontainer container kubepods-burstable-pod05f281f3f4f0545c56ad1b1b5d9fda48.slice. Nov 12 20:56:13.739212 systemd[1]: Created slice kubepods-burstable-podacdf0f90a675c8a64be57c5e58a8f86c.slice - libcontainer container kubepods-burstable-podacdf0f90a675c8a64be57c5e58a8f86c.slice. Nov 12 20:56:13.744121 systemd[1]: Created slice kubepods-burstable-pod59177b775832564b4d81ef77bebda8ff.slice - libcontainer container kubepods-burstable-pod59177b775832564b4d81ef77bebda8ff.slice. 
Nov 12 20:56:13.775458 kubelet[2871]: E1112 20:56:13.775428 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-d8aa37ea01?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="400ms" Nov 12 20:56:13.875166 kubelet[2871]: I1112 20:56:13.874930 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875166 kubelet[2871]: I1112 20:56:13.875002 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875166 kubelet[2871]: I1112 20:56:13.875041 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875166 kubelet[2871]: I1112 20:56:13.875072 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" 
(UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875166 kubelet[2871]: I1112 20:56:13.875107 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875549 kubelet[2871]: I1112 20:56:13.875139 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875549 kubelet[2871]: I1112 20:56:13.875170 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59177b775832564b4d81ef77bebda8ff-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-d8aa37ea01\" (UID: \"59177b775832564b4d81ef77bebda8ff\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875549 kubelet[2871]: I1112 20:56:13.875230 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.875549 kubelet[2871]: I1112 20:56:13.875264 2871 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.877586 kubelet[2871]: I1112 20:56:13.877551 2871 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:13.878013 kubelet[2871]: E1112 20:56:13.877963 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:14.038420 containerd[1712]: time="2024-11-12T20:56:14.038011386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-d8aa37ea01,Uid:05f281f3f4f0545c56ad1b1b5d9fda48,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:14.042623 containerd[1712]: time="2024-11-12T20:56:14.042581550Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-d8aa37ea01,Uid:acdf0f90a675c8a64be57c5e58a8f86c,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:14.049601 containerd[1712]: time="2024-11-12T20:56:14.049489346Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-d8aa37ea01,Uid:59177b775832564b4d81ef77bebda8ff,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:14.176041 kubelet[2871]: E1112 20:56:14.175930 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-d8aa37ea01?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="800ms" Nov 12 20:56:14.280639 kubelet[2871]: I1112 20:56:14.280599 2871 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:14.280989 kubelet[2871]: E1112 20:56:14.280969 2871 kubelet_node_status.go:96] 
"Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:14.421254 kubelet[2871]: E1112 20:56:14.421209 2871 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.15:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.15:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-d8aa37ea01.18075407a5796fa2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-d8aa37ea01,UID:ci-4081.2.0-a-d8aa37ea01,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-d8aa37ea01,},FirstTimestamp:2024-11-12 20:56:13.564948386 +0000 UTC m=+0.628201266,LastTimestamp:2024-11-12 20:56:13.564948386 +0000 UTC m=+0.628201266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-d8aa37ea01,}" Nov 12 20:56:14.427660 kubelet[2871]: W1112 20:56:14.427535 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.427660 kubelet[2871]: E1112 20:56:14.427598 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.15:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.618937 kubelet[2871]: W1112 20:56:14.618880 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get 
"https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-d8aa37ea01&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.618937 kubelet[2871]: E1112 20:56:14.618939 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.15:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-d8aa37ea01&limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.677048 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1059217379.mount: Deactivated successfully. Nov 12 20:56:14.711275 containerd[1712]: time="2024-11-12T20:56:14.711215078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:14.714569 containerd[1712]: time="2024-11-12T20:56:14.714525425Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:14.717372 containerd[1712]: time="2024-11-12T20:56:14.717297263Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Nov 12 20:56:14.720016 containerd[1712]: time="2024-11-12T20:56:14.719979001Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:56:14.722857 containerd[1712]: time="2024-11-12T20:56:14.722818640Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:14.725701 containerd[1712]: time="2024-11-12T20:56:14.725664980Z" level=info msg="ImageCreate event 
name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:14.728254 containerd[1712]: time="2024-11-12T20:56:14.727968312Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Nov 12 20:56:14.731503 containerd[1712]: time="2024-11-12T20:56:14.731471961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Nov 12 20:56:14.732229 containerd[1712]: time="2024-11-12T20:56:14.732179271Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.064784ms" Nov 12 20:56:14.734386 containerd[1712]: time="2024-11-12T20:56:14.734354801Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 691.69925ms" Nov 12 20:56:14.734928 containerd[1712]: time="2024-11-12T20:56:14.734896309Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 685.347362ms" Nov 12 20:56:14.973659 kubelet[2871]: W1112 
20:56:14.973521 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.973659 kubelet[2871]: E1112 20:56:14.973584 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.15:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:14.976887 kubelet[2871]: E1112 20:56:14.976858 2871 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.15:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-d8aa37ea01?timeout=10s\": dial tcp 10.200.8.15:6443: connect: connection refused" interval="1.6s" Nov 12 20:56:15.034613 containerd[1712]: time="2024-11-12T20:56:15.034332086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:15.034613 containerd[1712]: time="2024-11-12T20:56:15.034426088Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:15.034613 containerd[1712]: time="2024-11-12T20:56:15.034482889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.038157 containerd[1712]: time="2024-11-12T20:56:15.037680033Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.038834 containerd[1712]: time="2024-11-12T20:56:15.038552545Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:15.038834 containerd[1712]: time="2024-11-12T20:56:15.038607946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:15.038834 containerd[1712]: time="2024-11-12T20:56:15.038644047Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.038834 containerd[1712]: time="2024-11-12T20:56:15.038736048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.046201 containerd[1712]: time="2024-11-12T20:56:15.045874147Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:15.046358 containerd[1712]: time="2024-11-12T20:56:15.046307053Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:15.046860 containerd[1712]: time="2024-11-12T20:56:15.046378954Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.046860 containerd[1712]: time="2024-11-12T20:56:15.046561357Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:15.076334 systemd[1]: Started cri-containerd-17797daf36c97cdeb7ec0c78b03d7e5698a3c36c75d15de48be5b18d6164e4c4.scope - libcontainer container 17797daf36c97cdeb7ec0c78b03d7e5698a3c36c75d15de48be5b18d6164e4c4. Nov 12 20:56:15.077904 systemd[1]: Started cri-containerd-d53744786817879681d14256df34406d4c0a2a398e7f7f1b342fbc64e2cc198b.scope - libcontainer container d53744786817879681d14256df34406d4c0a2a398e7f7f1b342fbc64e2cc198b. 
Nov 12 20:56:15.082998 systemd[1]: Started cri-containerd-df5e4056aa54212be9488117e5892659e82137c85e8f24a6ef469b486233c9fc.scope - libcontainer container df5e4056aa54212be9488117e5892659e82137c85e8f24a6ef469b486233c9fc. Nov 12 20:56:15.084654 kubelet[2871]: I1112 20:56:15.083944 2871 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:15.084654 kubelet[2871]: E1112 20:56:15.084329 2871 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.15:6443/api/v1/nodes\": dial tcp 10.200.8.15:6443: connect: connection refused" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:15.084654 kubelet[2871]: W1112 20:56:15.084556 2871 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:15.084654 kubelet[2871]: E1112 20:56:15.084614 2871 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.15:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.15:6443: connect: connection refused Nov 12 20:56:15.158442 containerd[1712]: time="2024-11-12T20:56:15.158330916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-d8aa37ea01,Uid:05f281f3f4f0545c56ad1b1b5d9fda48,Namespace:kube-system,Attempt:0,} returns sandbox id \"d53744786817879681d14256df34406d4c0a2a398e7f7f1b342fbc64e2cc198b\"" Nov 12 20:56:15.164252 containerd[1712]: time="2024-11-12T20:56:15.164040796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-d8aa37ea01,Uid:acdf0f90a675c8a64be57c5e58a8f86c,Namespace:kube-system,Attempt:0,} returns sandbox id \"17797daf36c97cdeb7ec0c78b03d7e5698a3c36c75d15de48be5b18d6164e4c4\"" Nov 12 20:56:15.170359 containerd[1712]: 
time="2024-11-12T20:56:15.170320884Z" level=info msg="CreateContainer within sandbox \"d53744786817879681d14256df34406d4c0a2a398e7f7f1b342fbc64e2cc198b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Nov 12 20:56:15.171367 containerd[1712]: time="2024-11-12T20:56:15.171333998Z" level=info msg="CreateContainer within sandbox \"17797daf36c97cdeb7ec0c78b03d7e5698a3c36c75d15de48be5b18d6164e4c4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Nov 12 20:56:15.176456 containerd[1712]: time="2024-11-12T20:56:15.176411869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-d8aa37ea01,Uid:59177b775832564b4d81ef77bebda8ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"df5e4056aa54212be9488117e5892659e82137c85e8f24a6ef469b486233c9fc\"" Nov 12 20:56:15.179797 containerd[1712]: time="2024-11-12T20:56:15.179767515Z" level=info msg="CreateContainer within sandbox \"df5e4056aa54212be9488117e5892659e82137c85e8f24a6ef469b486233c9fc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Nov 12 20:56:15.246883 containerd[1712]: time="2024-11-12T20:56:15.246755250Z" level=info msg="CreateContainer within sandbox \"d53744786817879681d14256df34406d4c0a2a398e7f7f1b342fbc64e2cc198b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"ee24bc0c6fad82cd155b66b95935df43c16adf5936ecc47f59067e568ee55fd7\"" Nov 12 20:56:15.248031 containerd[1712]: time="2024-11-12T20:56:15.247809265Z" level=info msg="StartContainer for \"ee24bc0c6fad82cd155b66b95935df43c16adf5936ecc47f59067e568ee55fd7\"" Nov 12 20:56:15.262751 containerd[1712]: time="2024-11-12T20:56:15.262702373Z" level=info msg="CreateContainer within sandbox \"df5e4056aa54212be9488117e5892659e82137c85e8f24a6ef469b486233c9fc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9dc9206ce29b48c95267c182686f914c8ac52db4c688af4747d1fecc808d8398\"" Nov 12 20:56:15.264579 containerd[1712]: 
time="2024-11-12T20:56:15.263268980Z" level=info msg="StartContainer for \"9dc9206ce29b48c95267c182686f914c8ac52db4c688af4747d1fecc808d8398\"" Nov 12 20:56:15.268722 containerd[1712]: time="2024-11-12T20:56:15.268690456Z" level=info msg="CreateContainer within sandbox \"17797daf36c97cdeb7ec0c78b03d7e5698a3c36c75d15de48be5b18d6164e4c4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5b6d7663e551de39b42c6cbe6c83bf33feb68cada48d013c61cfaf53e7dbf569\"" Nov 12 20:56:15.269679 containerd[1712]: time="2024-11-12T20:56:15.269645569Z" level=info msg="StartContainer for \"5b6d7663e551de39b42c6cbe6c83bf33feb68cada48d013c61cfaf53e7dbf569\"" Nov 12 20:56:15.277415 systemd[1]: Started cri-containerd-ee24bc0c6fad82cd155b66b95935df43c16adf5936ecc47f59067e568ee55fd7.scope - libcontainer container ee24bc0c6fad82cd155b66b95935df43c16adf5936ecc47f59067e568ee55fd7. Nov 12 20:56:15.314510 systemd[1]: Started cri-containerd-9dc9206ce29b48c95267c182686f914c8ac52db4c688af4747d1fecc808d8398.scope - libcontainer container 9dc9206ce29b48c95267c182686f914c8ac52db4c688af4747d1fecc808d8398. Nov 12 20:56:15.322388 systemd[1]: Started cri-containerd-5b6d7663e551de39b42c6cbe6c83bf33feb68cada48d013c61cfaf53e7dbf569.scope - libcontainer container 5b6d7663e551de39b42c6cbe6c83bf33feb68cada48d013c61cfaf53e7dbf569. 
Nov 12 20:56:15.392547 containerd[1712]: time="2024-11-12T20:56:15.392494283Z" level=info msg="StartContainer for \"ee24bc0c6fad82cd155b66b95935df43c16adf5936ecc47f59067e568ee55fd7\" returns successfully" Nov 12 20:56:15.404103 containerd[1712]: time="2024-11-12T20:56:15.404062545Z" level=info msg="StartContainer for \"5b6d7663e551de39b42c6cbe6c83bf33feb68cada48d013c61cfaf53e7dbf569\" returns successfully" Nov 12 20:56:15.440838 containerd[1712]: time="2024-11-12T20:56:15.440784857Z" level=info msg="StartContainer for \"9dc9206ce29b48c95267c182686f914c8ac52db4c688af4747d1fecc808d8398\" returns successfully" Nov 12 20:56:16.687513 kubelet[2871]: I1112 20:56:16.687420 2871 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:17.382833 kubelet[2871]: E1112 20:56:17.382772 2871 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.0-a-d8aa37ea01\" not found" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:17.827374 kubelet[2871]: I1112 20:56:17.826604 2871 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:18.823911 kubelet[2871]: I1112 20:56:18.823857 2871 apiserver.go:52] "Watching apiserver" Nov 12 20:56:18.874366 kubelet[2871]: I1112 20:56:18.873790 2871 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:56:20.348226 kubelet[2871]: W1112 20:56:20.348127 2871 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:23.582425 kubelet[2871]: W1112 20:56:22.076338 2871 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:23.602618 systemd[1]: Reloading requested from client PID 3143 ('systemctl') (unit session-9.scope)... 
Nov 12 20:56:23.602633 systemd[1]: Reloading... Nov 12 20:56:23.648865 kubelet[2871]: I1112 20:56:23.648681 2871 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" podStartSLOduration=3.64859069 podStartE2EDuration="3.64859069s" podCreationTimestamp="2024-11-12 20:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:23.647278973 +0000 UTC m=+10.710531753" watchObservedRunningTime="2024-11-12 20:56:23.64859069 +0000 UTC m=+10.711843470" Nov 12 20:56:23.726357 zram_generator::config[3188]: No configuration found. Nov 12 20:56:23.848750 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:56:23.941588 systemd[1]: Reloading finished in 338 ms. Nov 12 20:56:23.983113 kubelet[2871]: I1112 20:56:23.982959 2871 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:23.983226 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:23.995897 systemd[1]: kubelet.service: Deactivated successfully. Nov 12 20:56:23.996178 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:24.001783 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:56:24.918337 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:56:24.919471 (kubelet)[3255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:56:25.103004 kubelet[3255]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:25.103004 kubelet[3255]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:56:25.103004 kubelet[3255]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:56:25.103554 kubelet[3255]: I1112 20:56:25.103094 3255 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:56:25.107680 kubelet[3255]: I1112 20:56:25.107649 3255 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Nov 12 20:56:25.107680 kubelet[3255]: I1112 20:56:25.107675 3255 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:56:25.107966 kubelet[3255]: I1112 20:56:25.107943 3255 server.go:919] "Client rotation is on, will bootstrap in background" Nov 12 20:56:25.109273 kubelet[3255]: I1112 20:56:25.109244 3255 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Nov 12 20:56:25.111274 kubelet[3255]: I1112 20:56:25.111120 3255 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:56:25.120327 kubelet[3255]: I1112 20:56:25.120302 3255 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:56:25.120594 kubelet[3255]: I1112 20:56:25.120574 3255 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:56:25.120773 kubelet[3255]: I1112 20:56:25.120740 3255 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:56:25.120773 kubelet[3255]: I1112 20:56:25.120770 3255 topology_manager.go:138] "Creating topology manager with none policy" Nov 12 20:56:25.120945 kubelet[3255]: I1112 20:56:25.120783 3255 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:56:25.120945 kubelet[3255]: I1112 
20:56:25.120818 3255 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:25.120945 kubelet[3255]: I1112 20:56:25.120921 3255 kubelet.go:396] "Attempting to sync node with API server" Nov 12 20:56:25.120945 kubelet[3255]: I1112 20:56:25.120937 3255 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:56:25.122184 kubelet[3255]: I1112 20:56:25.120965 3255 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:56:25.122184 kubelet[3255]: I1112 20:56:25.120983 3255 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:56:25.122532 kubelet[3255]: I1112 20:56:25.122513 3255 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:56:25.122724 kubelet[3255]: I1112 20:56:25.122707 3255 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:56:25.123152 kubelet[3255]: I1112 20:56:25.123130 3255 server.go:1256] "Started kubelet" Nov 12 20:56:25.128465 kubelet[3255]: I1112 20:56:25.128031 3255 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:56:25.135688 kubelet[3255]: I1112 20:56:25.135665 3255 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:56:25.136727 kubelet[3255]: I1112 20:56:25.136710 3255 server.go:461] "Adding debug handlers to kubelet server" Nov 12 20:56:25.137954 kubelet[3255]: I1112 20:56:25.137933 3255 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:56:25.138545 kubelet[3255]: I1112 20:56:25.138301 3255 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:56:25.140498 kubelet[3255]: I1112 20:56:25.140482 3255 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:56:25.144440 kubelet[3255]: I1112 20:56:25.143868 3255 reconciler_new.go:29] "Reconciler: start to sync 
state" Nov 12 20:56:25.144440 kubelet[3255]: I1112 20:56:25.143945 3255 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Nov 12 20:56:25.148742 kubelet[3255]: I1112 20:56:25.148726 3255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:56:25.149905 kubelet[3255]: I1112 20:56:25.149889 3255 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Nov 12 20:56:25.150008 kubelet[3255]: I1112 20:56:25.150000 3255 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:56:25.150075 kubelet[3255]: I1112 20:56:25.150068 3255 kubelet.go:2329] "Starting kubelet main sync loop" Nov 12 20:56:25.150559 kubelet[3255]: E1112 20:56:25.150182 3255 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:56:25.150653 kubelet[3255]: I1112 20:56:25.150460 3255 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:56:25.150816 kubelet[3255]: I1112 20:56:25.150795 3255 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:56:25.160642 kubelet[3255]: E1112 20:56:25.159981 3255 kubelet.go:1462] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Nov 12 20:56:25.160724 kubelet[3255]: I1112 20:56:25.160645 3255 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:56:25.215160 kubelet[3255]: I1112 20:56:25.215133 3255 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:56:25.215160 kubelet[3255]: I1112 20:56:25.215154 3255 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:56:25.215160 kubelet[3255]: I1112 20:56:25.215174 3255 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:56:25.215423 kubelet[3255]: I1112 20:56:25.215388 3255 state_mem.go:88] "Updated default CPUSet" cpuSet="" Nov 12 20:56:25.215423 kubelet[3255]: I1112 20:56:25.215414 3255 state_mem.go:96] "Updated CPUSet assignments" assignments={} Nov 12 20:56:25.215423 kubelet[3255]: I1112 20:56:25.215424 3255 policy_none.go:49] "None policy: Start" Nov 12 20:56:25.216235 kubelet[3255]: I1112 20:56:25.216066 3255 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:56:25.216235 kubelet[3255]: I1112 20:56:25.216107 3255 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:56:25.216389 kubelet[3255]: I1112 20:56:25.216349 3255 state_mem.go:75] "Updated machine memory state" Nov 12 20:56:25.220270 kubelet[3255]: I1112 20:56:25.220245 3255 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:56:25.220738 kubelet[3255]: I1112 20:56:25.220465 3255 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:56:25.244530 kubelet[3255]: I1112 20:56:25.244503 3255 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.251316 kubelet[3255]: I1112 20:56:25.251288 3255 topology_manager.go:215] "Topology Admit Handler" podUID="05f281f3f4f0545c56ad1b1b5d9fda48" podNamespace="kube-system" 
podName="kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.251409 kubelet[3255]: I1112 20:56:25.251381 3255 topology_manager.go:215] "Topology Admit Handler" podUID="acdf0f90a675c8a64be57c5e58a8f86c" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.251458 kubelet[3255]: I1112 20:56:25.251430 3255 topology_manager.go:215] "Topology Admit Handler" podUID="59177b775832564b4d81ef77bebda8ff" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.255385 kubelet[3255]: I1112 20:56:25.255365 3255 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.255646 kubelet[3255]: I1112 20:56:25.255533 3255 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.256849 kubelet[3255]: W1112 20:56:25.256100 3255 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:25.262118 kubelet[3255]: W1112 20:56:25.262088 3255 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:25.262225 kubelet[3255]: E1112 20:56:25.262208 3255 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" already exists" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.265339 kubelet[3255]: W1112 20:56:25.265318 3255 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:25.265420 kubelet[3255]: E1112 20:56:25.265370 3255 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" already exists" 
pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.444938 kubelet[3255]: I1112 20:56:25.444825 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.444938 kubelet[3255]: I1112 20:56:25.444912 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.444938 kubelet[3255]: I1112 20:56:25.444949 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/59177b775832564b4d81ef77bebda8ff-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-d8aa37ea01\" (UID: \"59177b775832564b4d81ef77bebda8ff\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445423 kubelet[3255]: I1112 20:56:25.444989 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445423 kubelet[3255]: I1112 20:56:25.445075 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445423 kubelet[3255]: I1112 20:56:25.445141 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/05f281f3f4f0545c56ad1b1b5d9fda48-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" (UID: \"05f281f3f4f0545c56ad1b1b5d9fda48\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445423 kubelet[3255]: I1112 20:56:25.445183 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445423 kubelet[3255]: I1112 20:56:25.445244 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:25.445565 kubelet[3255]: I1112 20:56:25.445298 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/acdf0f90a675c8a64be57c5e58a8f86c-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-d8aa37ea01\" (UID: \"acdf0f90a675c8a64be57c5e58a8f86c\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:26.121671 kubelet[3255]: 
I1112 20:56:26.121571 3255 apiserver.go:52] "Watching apiserver" Nov 12 20:56:28.178888 kubelet[3255]: I1112 20:56:26.144790 3255 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Nov 12 20:56:28.178888 kubelet[3255]: W1112 20:56:26.212878 3255 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Nov 12 20:56:28.178888 kubelet[3255]: E1112 20:56:26.212940 3255 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-a-d8aa37ea01\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-a-d8aa37ea01" Nov 12 20:56:28.178888 kubelet[3255]: I1112 20:56:26.222591 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-a-d8aa37ea01" podStartSLOduration=1.2225112870000001 podStartE2EDuration="1.222511287s" podCreationTimestamp="2024-11-12 20:56:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:26.212920659 +0000 UTC m=+1.287621758" watchObservedRunningTime="2024-11-12 20:56:26.222511287 +0000 UTC m=+1.297212386" Nov 12 20:56:32.898648 sudo[2236]: pam_unix(sudo:session): session closed for user root Nov 12 20:56:32.998668 sshd[2233]: pam_unix(sshd:session): session closed for user core Nov 12 20:56:33.003609 systemd[1]: sshd@6-10.200.8.15:22-10.200.16.10:32836.service: Deactivated successfully. Nov 12 20:56:33.005762 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:56:33.005978 systemd[1]: session-9.scope: Consumed 4.976s CPU time, 190.8M memory peak, 0B memory swap peak. Nov 12 20:56:33.006603 systemd-logind[1689]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:56:33.007559 systemd-logind[1689]: Removed session 9. 
Nov 12 20:56:34.490459 kubelet[3255]: I1112 20:56:34.489860 3255 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Nov 12 20:56:34.490924 containerd[1712]: time="2024-11-12T20:56:34.490351911Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Nov 12 20:56:34.491540 kubelet[3255]: I1112 20:56:34.491488 3255 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Nov 12 20:56:35.107078 kubelet[3255]: I1112 20:56:35.107028 3255 topology_manager.go:215] "Topology Admit Handler" podUID="569550a8-bca0-4fc3-8b7f-ad95ae90a555" podNamespace="kube-system" podName="kube-proxy-fjs4d"
Nov 12 20:56:35.120868 systemd[1]: Created slice kubepods-besteffort-pod569550a8_bca0_4fc3_8b7f_ad95ae90a555.slice - libcontainer container kubepods-besteffort-pod569550a8_bca0_4fc3_8b7f_ad95ae90a555.slice.
Nov 12 20:56:35.207912 kubelet[3255]: I1112 20:56:35.207825 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/569550a8-bca0-4fc3-8b7f-ad95ae90a555-lib-modules\") pod \"kube-proxy-fjs4d\" (UID: \"569550a8-bca0-4fc3-8b7f-ad95ae90a555\") " pod="kube-system/kube-proxy-fjs4d"
Nov 12 20:56:35.207912 kubelet[3255]: I1112 20:56:35.207885 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/569550a8-bca0-4fc3-8b7f-ad95ae90a555-kube-proxy\") pod \"kube-proxy-fjs4d\" (UID: \"569550a8-bca0-4fc3-8b7f-ad95ae90a555\") " pod="kube-system/kube-proxy-fjs4d"
Nov 12 20:56:35.207912 kubelet[3255]: I1112 20:56:35.207919 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/569550a8-bca0-4fc3-8b7f-ad95ae90a555-xtables-lock\") pod \"kube-proxy-fjs4d\" (UID: \"569550a8-bca0-4fc3-8b7f-ad95ae90a555\") " pod="kube-system/kube-proxy-fjs4d"
Nov 12 20:56:35.208246 kubelet[3255]: I1112 20:56:35.207953 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hbd7d\" (UniqueName: \"kubernetes.io/projected/569550a8-bca0-4fc3-8b7f-ad95ae90a555-kube-api-access-hbd7d\") pod \"kube-proxy-fjs4d\" (UID: \"569550a8-bca0-4fc3-8b7f-ad95ae90a555\") " pod="kube-system/kube-proxy-fjs4d"
Nov 12 20:56:35.428943 containerd[1712]: time="2024-11-12T20:56:35.428829827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjs4d,Uid:569550a8-bca0-4fc3-8b7f-ad95ae90a555,Namespace:kube-system,Attempt:0,}"
Nov 12 20:56:35.611997 kubelet[3255]: I1112 20:56:35.611958 3255 topology_manager.go:215] "Topology Admit Handler" podUID="4cba50c5-7c39-405d-8ad0-8f6c24ed8356" podNamespace="tigera-operator" podName="tigera-operator-56b74f76df-69nm2"
Nov 12 20:56:35.624500 systemd[1]: Created slice kubepods-besteffort-pod4cba50c5_7c39_405d_8ad0_8f6c24ed8356.slice - libcontainer container kubepods-besteffort-pod4cba50c5_7c39_405d_8ad0_8f6c24ed8356.slice.
Nov 12 20:56:35.694125 containerd[1712]: time="2024-11-12T20:56:35.693763230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:35.694125 containerd[1712]: time="2024-11-12T20:56:35.693847831Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:35.694125 containerd[1712]: time="2024-11-12T20:56:35.693869931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:35.694863 containerd[1712]: time="2024-11-12T20:56:35.694027933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:35.709799 kubelet[3255]: I1112 20:56:35.709728 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/4cba50c5-7c39-405d-8ad0-8f6c24ed8356-var-lib-calico\") pod \"tigera-operator-56b74f76df-69nm2\" (UID: \"4cba50c5-7c39-405d-8ad0-8f6c24ed8356\") " pod="tigera-operator/tigera-operator-56b74f76df-69nm2"
Nov 12 20:56:35.710332 kubelet[3255]: I1112 20:56:35.710009 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5h5d6\" (UniqueName: \"kubernetes.io/projected/4cba50c5-7c39-405d-8ad0-8f6c24ed8356-kube-api-access-5h5d6\") pod \"tigera-operator-56b74f76df-69nm2\" (UID: \"4cba50c5-7c39-405d-8ad0-8f6c24ed8356\") " pod="tigera-operator/tigera-operator-56b74f76df-69nm2"
Nov 12 20:56:35.726375 systemd[1]: Started cri-containerd-c1478b97e2c9e65633996e55cb80ca472800b6295eeafca252bedd43ac7532ad.scope - libcontainer container c1478b97e2c9e65633996e55cb80ca472800b6295eeafca252bedd43ac7532ad.scope.
Nov 12 20:56:35.748379 containerd[1712]: time="2024-11-12T20:56:35.748340110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fjs4d,Uid:569550a8-bca0-4fc3-8b7f-ad95ae90a555,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1478b97e2c9e65633996e55cb80ca472800b6295eeafca252bedd43ac7532ad\""
Nov 12 20:56:35.751712 containerd[1712]: time="2024-11-12T20:56:35.751634952Z" level=info msg="CreateContainer within sandbox \"c1478b97e2c9e65633996e55cb80ca472800b6295eeafca252bedd43ac7532ad\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Nov 12 20:56:35.929107 containerd[1712]: time="2024-11-12T20:56:35.929054164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-69nm2,Uid:4cba50c5-7c39-405d-8ad0-8f6c24ed8356,Namespace:tigera-operator,Attempt:0,}"
Nov 12 20:56:36.085792 containerd[1712]: time="2024-11-12T20:56:36.085727317Z" level=info msg="CreateContainer within sandbox \"c1478b97e2c9e65633996e55cb80ca472800b6295eeafca252bedd43ac7532ad\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4647c738095981f6fd023480ae97a1491015a01781a4bfc19d900b4b6b30463\""
Nov 12 20:56:36.086821 containerd[1712]: time="2024-11-12T20:56:36.086782730Z" level=info msg="StartContainer for \"f4647c738095981f6fd023480ae97a1491015a01781a4bfc19d900b4b6b30463\""
Nov 12 20:56:36.113366 systemd[1]: Started cri-containerd-f4647c738095981f6fd023480ae97a1491015a01781a4bfc19d900b4b6b30463.scope - libcontainer container f4647c738095981f6fd023480ae97a1491015a01781a4bfc19d900b4b6b30463.scope.
Nov 12 20:56:36.226228 containerd[1712]: time="2024-11-12T20:56:36.225785964Z" level=info msg="StartContainer for \"f4647c738095981f6fd023480ae97a1491015a01781a4bfc19d900b4b6b30463\" returns successfully"
Nov 12 20:56:36.353242 containerd[1712]: time="2024-11-12T20:56:36.351894036Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:36.353242 containerd[1712]: time="2024-11-12T20:56:36.351952937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:36.353242 containerd[1712]: time="2024-11-12T20:56:36.351974737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:36.353242 containerd[1712]: time="2024-11-12T20:56:36.352056438Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:36.393380 systemd[1]: Started cri-containerd-049b5a8526f641279f603f3a8523444d44835a38d2c375a3b88f95d529a3e8a6.scope - libcontainer container 049b5a8526f641279f603f3a8523444d44835a38d2c375a3b88f95d529a3e8a6.scope.
Nov 12 20:56:36.445992 containerd[1712]: time="2024-11-12T20:56:36.444937996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-56b74f76df-69nm2,Uid:4cba50c5-7c39-405d-8ad0-8f6c24ed8356,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"049b5a8526f641279f603f3a8523444d44835a38d2c375a3b88f95d529a3e8a6\""
Nov 12 20:56:36.447546 containerd[1712]: time="2024-11-12T20:56:36.447515728Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\""
Nov 12 20:56:38.488085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount253262208.mount: Deactivated successfully.
Nov 12 20:56:39.073982 containerd[1712]: time="2024-11-12T20:56:39.073926777Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:39.075775 containerd[1712]: time="2024-11-12T20:56:39.075717899Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763375"
Nov 12 20:56:39.079318 containerd[1712]: time="2024-11-12T20:56:39.079264343Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:39.085112 containerd[1712]: time="2024-11-12T20:56:39.085062716Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:39.086371 containerd[1712]: time="2024-11-12T20:56:39.085762424Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 2.637870091s"
Nov 12 20:56:39.086371 containerd[1712]: time="2024-11-12T20:56:39.085800725Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\""
Nov 12 20:56:39.087969 containerd[1712]: time="2024-11-12T20:56:39.087905251Z" level=info msg="CreateContainer within sandbox \"049b5a8526f641279f603f3a8523444d44835a38d2c375a3b88f95d529a3e8a6\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Nov 12 20:56:39.123355 containerd[1712]: time="2024-11-12T20:56:39.123315893Z" level=info msg="CreateContainer within sandbox \"049b5a8526f641279f603f3a8523444d44835a38d2c375a3b88f95d529a3e8a6\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b5e29e5675161cb1d9a9a8afec370f5089dbda590f935c6ac7d604387b19f5b9\""
Nov 12 20:56:39.124210 containerd[1712]: time="2024-11-12T20:56:39.123901300Z" level=info msg="StartContainer for \"b5e29e5675161cb1d9a9a8afec370f5089dbda590f935c6ac7d604387b19f5b9\""
Nov 12 20:56:39.151490 systemd[1]: Started cri-containerd-b5e29e5675161cb1d9a9a8afec370f5089dbda590f935c6ac7d604387b19f5b9.scope - libcontainer container b5e29e5675161cb1d9a9a8afec370f5089dbda590f935c6ac7d604387b19f5b9.scope.
Nov 12 20:56:39.178322 containerd[1712]: time="2024-11-12T20:56:39.178285578Z" level=info msg="StartContainer for \"b5e29e5675161cb1d9a9a8afec370f5089dbda590f935c6ac7d604387b19f5b9\" returns successfully"
Nov 12 20:56:39.252644 kubelet[3255]: I1112 20:56:39.252267 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fjs4d" podStartSLOduration=4.2522174 podStartE2EDuration="4.2522174s" podCreationTimestamp="2024-11-12 20:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:56:37.243818257 +0000 UTC m=+12.318519456" watchObservedRunningTime="2024-11-12 20:56:39.2522174 +0000 UTC m=+14.326918599"
Nov 12 20:56:42.228782 kubelet[3255]: I1112 20:56:42.228731 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-56b74f76df-69nm2" podStartSLOduration=4.589274702 podStartE2EDuration="7.228672713s" podCreationTimestamp="2024-11-12 20:56:35 +0000 UTC" firstStartedPulling="2024-11-12 20:56:36.446702518 +0000 UTC m=+11.521403617" lastFinishedPulling="2024-11-12 20:56:39.086100429 +0000 UTC m=+14.160801628" observedRunningTime="2024-11-12 20:56:39.252549204 +0000 UTC m=+14.327250303" watchObservedRunningTime="2024-11-12 20:56:42.228672713 +0000 UTC m=+17.303373812"
Nov 12 20:56:42.230275 kubelet[3255]: I1112 20:56:42.229104 3255 topology_manager.go:215] "Topology Admit Handler" podUID="771ead19-287f-46cc-81d7-29fdc70af212" podNamespace="calico-system" podName="calico-typha-6c5779f8c8-hk4jr"
Nov 12 20:56:42.240447 systemd[1]: Created slice kubepods-besteffort-pod771ead19_287f_46cc_81d7_29fdc70af212.slice - libcontainer container kubepods-besteffort-pod771ead19_287f_46cc_81d7_29fdc70af212.slice.
Nov 12 20:56:42.247272 kubelet[3255]: I1112 20:56:42.247244 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/771ead19-287f-46cc-81d7-29fdc70af212-tigera-ca-bundle\") pod \"calico-typha-6c5779f8c8-hk4jr\" (UID: \"771ead19-287f-46cc-81d7-29fdc70af212\") " pod="calico-system/calico-typha-6c5779f8c8-hk4jr"
Nov 12 20:56:42.247436 kubelet[3255]: I1112 20:56:42.247291 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vc4np\" (UniqueName: \"kubernetes.io/projected/771ead19-287f-46cc-81d7-29fdc70af212-kube-api-access-vc4np\") pod \"calico-typha-6c5779f8c8-hk4jr\" (UID: \"771ead19-287f-46cc-81d7-29fdc70af212\") " pod="calico-system/calico-typha-6c5779f8c8-hk4jr"
Nov 12 20:56:42.247436 kubelet[3255]: I1112 20:56:42.247319 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/771ead19-287f-46cc-81d7-29fdc70af212-typha-certs\") pod \"calico-typha-6c5779f8c8-hk4jr\" (UID: \"771ead19-287f-46cc-81d7-29fdc70af212\") " pod="calico-system/calico-typha-6c5779f8c8-hk4jr"
Nov 12 20:56:42.410023 kubelet[3255]: I1112 20:56:42.409875 3255 topology_manager.go:215] "Topology Admit Handler" podUID="fc681a0a-57ae-4416-b7a2-2122535d5d28" podNamespace="calico-system" podName="calico-node-7pcg6"
Nov 12 20:56:42.423347 systemd[1]: Created slice kubepods-besteffort-podfc681a0a_57ae_4416_b7a2_2122535d5d28.slice - libcontainer container kubepods-besteffort-podfc681a0a_57ae_4416_b7a2_2122535d5d28.slice.
Nov 12 20:56:42.448956 kubelet[3255]: I1112 20:56:42.448178 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-policysync\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.448956 kubelet[3255]: I1112 20:56:42.448241 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-cni-net-dir\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.448956 kubelet[3255]: I1112 20:56:42.448274 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-lib-modules\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.448956 kubelet[3255]: I1112 20:56:42.448305 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-cni-log-dir\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.448956 kubelet[3255]: I1112 20:56:42.448335 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc681a0a-57ae-4416-b7a2-2122535d5d28-tigera-ca-bundle\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449303 kubelet[3255]: I1112 20:56:42.448362 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-flexvol-driver-host\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449303 kubelet[3255]: I1112 20:56:42.448395 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-xtables-lock\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449303 kubelet[3255]: I1112 20:56:42.448420 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-var-lib-calico\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449303 kubelet[3255]: I1112 20:56:42.448452 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc681a0a-57ae-4416-b7a2-2122535d5d28-node-certs\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449303 kubelet[3255]: I1112 20:56:42.448483 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9j6s4\" (UniqueName: \"kubernetes.io/projected/fc681a0a-57ae-4416-b7a2-2122535d5d28-kube-api-access-9j6s4\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449515 kubelet[3255]: I1112 20:56:42.448512 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-cni-bin-dir\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.449515 kubelet[3255]: I1112 20:56:42.448579 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc681a0a-57ae-4416-b7a2-2122535d5d28-var-run-calico\") pod \"calico-node-7pcg6\" (UID: \"fc681a0a-57ae-4416-b7a2-2122535d5d28\") " pod="calico-system/calico-node-7pcg6"
Nov 12 20:56:42.555227 kubelet[3255]: E1112 20:56:42.553469 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.555227 kubelet[3255]: W1112 20:56:42.553499 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.555227 kubelet[3255]: E1112 20:56:42.553532 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.557171 containerd[1712]: time="2024-11-12T20:56:42.555697891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5779f8c8-hk4jr,Uid:771ead19-287f-46cc-81d7-29fdc70af212,Namespace:calico-system,Attempt:0,}"
Nov 12 20:56:42.562790 kubelet[3255]: E1112 20:56:42.562465 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.562790 kubelet[3255]: W1112 20:56:42.562485 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.562790 kubelet[3255]: E1112 20:56:42.562509 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.579980 kubelet[3255]: E1112 20:56:42.579958 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.580155 kubelet[3255]: W1112 20:56:42.580099 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.580155 kubelet[3255]: E1112 20:56:42.580126 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.584914 kubelet[3255]: I1112 20:56:42.583803 3255 topology_manager.go:215] "Topology Admit Handler" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152" podNamespace="calico-system" podName="csi-node-driver-fq22j"
Nov 12 20:56:42.584914 kubelet[3255]: E1112 20:56:42.584232 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:42.638175 containerd[1712]: time="2024-11-12T20:56:42.634644175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:56:42.638175 containerd[1712]: time="2024-11-12T20:56:42.634710176Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:56:42.638175 containerd[1712]: time="2024-11-12T20:56:42.634747676Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:42.638175 containerd[1712]: time="2024-11-12T20:56:42.634837478Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:56:42.650474 kubelet[3255]: E1112 20:56:42.650414 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.650614 kubelet[3255]: W1112 20:56:42.650511 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.651237 kubelet[3255]: E1112 20:56:42.650818 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.651237 kubelet[3255]: E1112 20:56:42.651154 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.651237 kubelet[3255]: W1112 20:56:42.651168 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.651439 kubelet[3255]: E1112 20:56:42.651271 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.651925 kubelet[3255]: E1112 20:56:42.651889 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.651925 kubelet[3255]: W1112 20:56:42.651910 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.652042 kubelet[3255]: E1112 20:56:42.651931 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.653207 kubelet[3255]: E1112 20:56:42.652530 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.653207 kubelet[3255]: W1112 20:56:42.652546 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.653207 kubelet[3255]: E1112 20:56:42.652702 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.654491 kubelet[3255]: E1112 20:56:42.654467 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.654491 kubelet[3255]: W1112 20:56:42.654487 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.654614 kubelet[3255]: E1112 20:56:42.654504 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.655908 kubelet[3255]: E1112 20:56:42.654975 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.655908 kubelet[3255]: W1112 20:56:42.655091 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.655908 kubelet[3255]: E1112 20:56:42.655110 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.655908 kubelet[3255]: E1112 20:56:42.655541 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.655908 kubelet[3255]: W1112 20:56:42.655553 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.655908 kubelet[3255]: E1112 20:56:42.655570 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.656853 kubelet[3255]: E1112 20:56:42.656298 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.656853 kubelet[3255]: W1112 20:56:42.656314 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.656853 kubelet[3255]: E1112 20:56:42.656332 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.657027 kubelet[3255]: E1112 20:56:42.656824 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.657027 kubelet[3255]: W1112 20:56:42.656892 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.657027 kubelet[3255]: E1112 20:56:42.656911 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.657855 kubelet[3255]: E1112 20:56:42.657458 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.657855 kubelet[3255]: W1112 20:56:42.657576 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.657855 kubelet[3255]: E1112 20:56:42.657602 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.658019 kubelet[3255]: E1112 20:56:42.658001 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.658019 kubelet[3255]: W1112 20:56:42.658013 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.658104 kubelet[3255]: E1112 20:56:42.658030 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.659205 kubelet[3255]: E1112 20:56:42.658518 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.659205 kubelet[3255]: W1112 20:56:42.658534 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.659205 kubelet[3255]: E1112 20:56:42.658550 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.659205 kubelet[3255]: E1112 20:56:42.659055 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.659205 kubelet[3255]: W1112 20:56:42.659068 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.659205 kubelet[3255]: E1112 20:56:42.659084 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.659716 kubelet[3255]: E1112 20:56:42.659496 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.659716 kubelet[3255]: W1112 20:56:42.659513 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.659716 kubelet[3255]: E1112 20:56:42.659529 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.660369 kubelet[3255]: E1112 20:56:42.659996 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.660369 kubelet[3255]: W1112 20:56:42.660012 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.660369 kubelet[3255]: E1112 20:56:42.660028 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.661048 kubelet[3255]: E1112 20:56:42.660605 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.661048 kubelet[3255]: W1112 20:56:42.660621 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.661048 kubelet[3255]: E1112 20:56:42.660638 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.661237 kubelet[3255]: E1112 20:56:42.661081 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.661237 kubelet[3255]: W1112 20:56:42.661092 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.661237 kubelet[3255]: E1112 20:56:42.661221 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.662287 kubelet[3255]: E1112 20:56:42.661733 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.662287 kubelet[3255]: W1112 20:56:42.661751 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.662287 kubelet[3255]: E1112 20:56:42.661769 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.662287 kubelet[3255]: E1112 20:56:42.662262 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.662287 kubelet[3255]: W1112 20:56:42.662275 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.662287 kubelet[3255]: E1112 20:56:42.662291 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.663651 kubelet[3255]: E1112 20:56:42.662801 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.663651 kubelet[3255]: W1112 20:56:42.662815 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.663651 kubelet[3255]: E1112 20:56:42.662833 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:42.679217 systemd[1]: Started cri-containerd-5a0f350a603c331699f5ab550badaa2df7ec611f0a566a960233568fec7c8373.scope - libcontainer container 5a0f350a603c331699f5ab550badaa2df7ec611f0a566a960233568fec7c8373.scope.
Nov 12 20:56:42.731970 containerd[1712]: time="2024-11-12T20:56:42.730558471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pcg6,Uid:fc681a0a-57ae-4416-b7a2-2122535d5d28,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:42.746366 containerd[1712]: time="2024-11-12T20:56:42.746176266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6c5779f8c8-hk4jr,Uid:771ead19-287f-46cc-81d7-29fdc70af212,Namespace:calico-system,Attempt:0,} returns sandbox id \"5a0f350a603c331699f5ab550badaa2df7ec611f0a566a960233568fec7c8373\"" Nov 12 20:56:42.749960 containerd[1712]: time="2024-11-12T20:56:42.749154603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:56:42.751081 kubelet[3255]: I1112 20:56:42.751044 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f56efb82-9a7d-420d-9381-5bbb29af7152-kubelet-dir\") pod \"csi-node-driver-fq22j\" (UID: \"f56efb82-9a7d-420d-9381-5bbb29af7152\") " pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:42.751736 kubelet[3255]: I1112 20:56:42.751585 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bbh8\" (UniqueName: \"kubernetes.io/projected/f56efb82-9a7d-420d-9381-5bbb29af7152-kube-api-access-8bbh8\") pod \"csi-node-driver-fq22j\" (UID: \"f56efb82-9a7d-420d-9381-5bbb29af7152\") " pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:42.753140 kubelet[3255]: I1112 20:56:42.753141 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f56efb82-9a7d-420d-9381-5bbb29af7152-registration-dir\") pod \"csi-node-driver-fq22j\" (UID: \"f56efb82-9a7d-420d-9381-5bbb29af7152\") " pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:42.754165 kubelet[3255]: I1112 20:56:42.754086 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f56efb82-9a7d-420d-9381-5bbb29af7152-varrun\") pod \"csi-node-driver-fq22j\" (UID: \"f56efb82-9a7d-420d-9381-5bbb29af7152\") " pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:42.754850 kubelet[3255]: I1112 20:56:42.754793 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f56efb82-9a7d-420d-9381-5bbb29af7152-socket-dir\") pod \"csi-node-driver-fq22j\" (UID: \"f56efb82-9a7d-420d-9381-5bbb29af7152\") " pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:42.796676 containerd[1712]: time="2024-11-12T20:56:42.796049488Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:56:42.796676 containerd[1712]: time="2024-11-12T20:56:42.796159789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:56:42.796676 containerd[1712]: time="2024-11-12T20:56:42.796232090Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:42.796676 containerd[1712]: time="2024-11-12T20:56:42.796332291Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:56:42.826541 systemd[1]: Started cri-containerd-b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c.scope - libcontainer container b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c.
Nov 12 20:56:42.856704 kubelet[3255]: E1112 20:56:42.856674 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.856850 kubelet[3255]: W1112 20:56:42.856700 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.856850 kubelet[3255]: E1112 20:56:42.856839 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.857862 kubelet[3255]: E1112 20:56:42.857835 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.858115 kubelet[3255]: W1112 20:56:42.857973 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.858115 kubelet[3255]: E1112 20:56:42.858005 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.858844 kubelet[3255]: E1112 20:56:42.858825 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.858844 kubelet[3255]: W1112 20:56:42.858843 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.858980 kubelet[3255]: E1112 20:56:42.858864 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.859135 kubelet[3255]: E1112 20:56:42.859120 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.859210 kubelet[3255]: W1112 20:56:42.859135 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.859210 kubelet[3255]: E1112 20:56:42.859161 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.859494 kubelet[3255]: E1112 20:56:42.859414 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.859494 kubelet[3255]: W1112 20:56:42.859440 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.859494 kubelet[3255]: E1112 20:56:42.859464 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.860212 kubelet[3255]: E1112 20:56:42.860167 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.860212 kubelet[3255]: W1112 20:56:42.860196 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.860500 kubelet[3255]: E1112 20:56:42.860477 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.861339 kubelet[3255]: E1112 20:56:42.860887 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.861339 kubelet[3255]: W1112 20:56:42.860902 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.861339 kubelet[3255]: E1112 20:56:42.861246 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.861702 kubelet[3255]: E1112 20:56:42.861682 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.861702 kubelet[3255]: W1112 20:56:42.861697 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.862068 kubelet[3255]: E1112 20:56:42.861808 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.862284 containerd[1712]: time="2024-11-12T20:56:42.861892809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pcg6,Uid:fc681a0a-57ae-4416-b7a2-2122535d5d28,Namespace:calico-system,Attempt:0,} returns sandbox id \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\"" Nov 12 20:56:42.862879 kubelet[3255]: E1112 20:56:42.862733 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.862879 kubelet[3255]: W1112 20:56:42.862749 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.862879 kubelet[3255]: E1112 20:56:42.862766 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.863740 kubelet[3255]: E1112 20:56:42.863709 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.863740 kubelet[3255]: W1112 20:56:42.863729 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.864468 kubelet[3255]: E1112 20:56:42.863746 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.864468 kubelet[3255]: E1112 20:56:42.864270 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.864468 kubelet[3255]: W1112 20:56:42.864389 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.864932 kubelet[3255]: E1112 20:56:42.864860 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.865344 kubelet[3255]: E1112 20:56:42.865090 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.865344 kubelet[3255]: W1112 20:56:42.865103 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.865344 kubelet[3255]: E1112 20:56:42.865224 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.865618 kubelet[3255]: E1112 20:56:42.865470 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.865618 kubelet[3255]: W1112 20:56:42.865482 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.865618 kubelet[3255]: E1112 20:56:42.865599 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.866473 kubelet[3255]: E1112 20:56:42.866148 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.866473 kubelet[3255]: W1112 20:56:42.866162 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.866473 kubelet[3255]: E1112 20:56:42.866182 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.866639 kubelet[3255]: E1112 20:56:42.866487 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.866639 kubelet[3255]: W1112 20:56:42.866498 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.866639 kubelet[3255]: E1112 20:56:42.866588 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.867231 kubelet[3255]: E1112 20:56:42.866831 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.867231 kubelet[3255]: W1112 20:56:42.866842 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.867231 kubelet[3255]: E1112 20:56:42.866929 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.867231 kubelet[3255]: E1112 20:56:42.867083 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.867231 kubelet[3255]: W1112 20:56:42.867094 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.867231 kubelet[3255]: E1112 20:56:42.867179 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.868352 kubelet[3255]: E1112 20:56:42.867755 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.868352 kubelet[3255]: W1112 20:56:42.867770 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.868352 kubelet[3255]: E1112 20:56:42.867792 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.868352 kubelet[3255]: E1112 20:56:42.868090 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.868352 kubelet[3255]: W1112 20:56:42.868101 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.868352 kubelet[3255]: E1112 20:56:42.868120 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.868365 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.869292 kubelet[3255]: W1112 20:56:42.868376 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.868392 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.868761 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.869292 kubelet[3255]: W1112 20:56:42.868772 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.868794 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.869063 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.869292 kubelet[3255]: W1112 20:56:42.869074 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.869292 kubelet[3255]: E1112 20:56:42.869090 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.870126 kubelet[3255]: E1112 20:56:42.870016 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.870126 kubelet[3255]: W1112 20:56:42.870033 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.870578 kubelet[3255]: E1112 20:56:42.870260 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:42.870787 kubelet[3255]: E1112 20:56:42.870758 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.870787 kubelet[3255]: W1112 20:56:42.870780 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.870896 kubelet[3255]: E1112 20:56:42.870799 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:42.871715 kubelet[3255]: E1112 20:56:42.871695 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:42.871715 kubelet[3255]: W1112 20:56:42.871713 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:42.871851 kubelet[3255]: E1112 20:56:42.871730 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 20:56:42.878018 kubelet[3255]: E1112 20:56:42.877745 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:42.878018 kubelet[3255]: W1112 20:56:42.877764 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:42.878018 kubelet[3255]: E1112 20:56:42.877783 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:44.150706 kubelet[3255]: E1112 20:56:44.150655 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:44.771204 containerd[1712]: time="2024-11-12T20:56:44.771157274Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:44.773340 containerd[1712]: time="2024-11-12T20:56:44.773281699Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168"
Nov 12 20:56:44.777989 containerd[1712]: time="2024-11-12T20:56:44.777933155Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:44.784294 containerd[1712]: time="2024-11-12T20:56:44.784104830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:44.785606 containerd[1712]: time="2024-11-12T20:56:44.785044641Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.035851138s"
Nov 12 20:56:44.785606 containerd[1712]: time="2024-11-12T20:56:44.785078042Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\""
Nov 12 20:56:44.786298 containerd[1712]: time="2024-11-12T20:56:44.786276656Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\""
Nov 12 20:56:44.805920 containerd[1712]: time="2024-11-12T20:56:44.805888693Z" level=info msg="CreateContainer within sandbox \"5a0f350a603c331699f5ab550badaa2df7ec611f0a566a960233568fec7c8373\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Nov 12 20:56:44.849598 containerd[1712]: time="2024-11-12T20:56:44.849560520Z" level=info msg="CreateContainer within sandbox \"5a0f350a603c331699f5ab550badaa2df7ec611f0a566a960233568fec7c8373\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2172809fa6bb28dcca8969f0931860db54b24fdc5ab25c71f292ac6e870b7b57\""
Nov 12 20:56:44.852250 containerd[1712]: time="2024-11-12T20:56:44.850110326Z" level=info msg="StartContainer for \"2172809fa6bb28dcca8969f0931860db54b24fdc5ab25c71f292ac6e870b7b57\""
Nov 12 20:56:44.880351 systemd[1]: Started cri-containerd-2172809fa6bb28dcca8969f0931860db54b24fdc5ab25c71f292ac6e870b7b57.scope - libcontainer container 2172809fa6bb28dcca8969f0931860db54b24fdc5ab25c71f292ac6e870b7b57.
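The repeated driver-call.go failures above have a single root cause: the kubelet exec's the FlexVolume driver binary `/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds`, the binary does not exist, and the captured stdout (`""`) is then decoded as JSON. Go's `encoding/json` returns exactly the logged error string for empty input. A minimal sketch (the `driverStatus` struct shape is an assumption, loosely modeled on the FlexVolume status convention, not kubelet's actual type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus loosely mirrors the JSON a FlexVolume driver prints on
// stdout; the exact field set here is an assumption for illustration.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

// unmarshalDriverOutput decodes a driver's captured stdout, the way the
// kubelet's driver-call path does after running the driver binary.
func unmarshalDriverOutput(out []byte) (driverStatus, error) {
	var ds driverStatus
	err := json.Unmarshal(out, &ds)
	return ds, err
}

func main() {
	// The uds binary was never found, so the captured output is "".
	_, err := unmarshalDriverOutput([]byte(""))
	fmt.Println(err) // unexpected end of JSON input
}
```

This is why the same triplet repeats: the exec failure (W line) produces empty output, the empty output produces the unmarshal error (first E line), and plugin probing records both (second E line).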
Nov 12 20:56:44.930024 containerd[1712]: time="2024-11-12T20:56:44.929890389Z" level=info msg="StartContainer for \"2172809fa6bb28dcca8969f0931860db54b24fdc5ab25c71f292ac6e870b7b57\" returns successfully" Nov 12 20:56:45.279420 kubelet[3255]: E1112 20:56:45.279395 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.279960 kubelet[3255]: W1112 20:56:45.279472 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.279960 kubelet[3255]: E1112 20:56:45.279501 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.279960 kubelet[3255]: E1112 20:56:45.279744 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.279960 kubelet[3255]: W1112 20:56:45.279757 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.279960 kubelet[3255]: E1112 20:56:45.279799 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.280236 kubelet[3255]: E1112 20:56:45.280022 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.280236 kubelet[3255]: W1112 20:56:45.280033 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.280236 kubelet[3255]: E1112 20:56:45.280051 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.280387 kubelet[3255]: E1112 20:56:45.280266 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.280387 kubelet[3255]: W1112 20:56:45.280276 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.280387 kubelet[3255]: E1112 20:56:45.280293 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.280523 kubelet[3255]: E1112 20:56:45.280492 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.280523 kubelet[3255]: W1112 20:56:45.280503 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.280523 kubelet[3255]: E1112 20:56:45.280518 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.280710 kubelet[3255]: E1112 20:56:45.280693 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.280710 kubelet[3255]: W1112 20:56:45.280708 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.280834 kubelet[3255]: E1112 20:56:45.280723 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.280928 kubelet[3255]: E1112 20:56:45.280909 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.280928 kubelet[3255]: W1112 20:56:45.280922 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.281050 kubelet[3255]: E1112 20:56:45.280937 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.281141 kubelet[3255]: E1112 20:56:45.281122 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.281141 kubelet[3255]: W1112 20:56:45.281136 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.281255 kubelet[3255]: E1112 20:56:45.281150 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.281386 kubelet[3255]: E1112 20:56:45.281365 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.281386 kubelet[3255]: W1112 20:56:45.281379 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.281386 kubelet[3255]: E1112 20:56:45.281394 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.281669 kubelet[3255]: E1112 20:56:45.281581 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.281669 kubelet[3255]: W1112 20:56:45.281591 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.281669 kubelet[3255]: E1112 20:56:45.281602 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.281844 kubelet[3255]: E1112 20:56:45.281833 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.281891 kubelet[3255]: W1112 20:56:45.281845 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.281891 kubelet[3255]: E1112 20:56:45.281860 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.282098 kubelet[3255]: E1112 20:56:45.282082 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.282098 kubelet[3255]: W1112 20:56:45.282095 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.282225 kubelet[3255]: E1112 20:56:45.282124 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.282367 kubelet[3255]: E1112 20:56:45.282349 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.282367 kubelet[3255]: W1112 20:56:45.282362 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.282485 kubelet[3255]: E1112 20:56:45.282379 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.282587 kubelet[3255]: E1112 20:56:45.282573 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.282587 kubelet[3255]: W1112 20:56:45.282583 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.282677 kubelet[3255]: E1112 20:56:45.282599 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.282797 kubelet[3255]: E1112 20:56:45.282777 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.282797 kubelet[3255]: W1112 20:56:45.282790 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.282909 kubelet[3255]: E1112 20:56:45.282806 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.380023 kubelet[3255]: E1112 20:56:45.379990 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.380023 kubelet[3255]: W1112 20:56:45.380008 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.380023 kubelet[3255]: E1112 20:56:45.380030 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.380409 kubelet[3255]: E1112 20:56:45.380319 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.380409 kubelet[3255]: W1112 20:56:45.380330 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.380409 kubelet[3255]: E1112 20:56:45.380361 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.380643 kubelet[3255]: E1112 20:56:45.380622 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.380643 kubelet[3255]: W1112 20:56:45.380639 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.380870 kubelet[3255]: E1112 20:56:45.380664 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.380946 kubelet[3255]: E1112 20:56:45.380892 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.380946 kubelet[3255]: W1112 20:56:45.380903 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.380946 kubelet[3255]: E1112 20:56:45.380930 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.381250 kubelet[3255]: E1112 20:56:45.381234 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.381250 kubelet[3255]: W1112 20:56:45.381248 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.381276 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.381489 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.383932 kubelet[3255]: W1112 20:56:45.381498 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.381516 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.381793 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.383932 kubelet[3255]: W1112 20:56:45.381810 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.381839 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.382121 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.383932 kubelet[3255]: W1112 20:56:45.382130 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.383932 kubelet[3255]: E1112 20:56:45.382159 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382361 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384233 kubelet[3255]: W1112 20:56:45.382370 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382405 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382609 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384233 kubelet[3255]: W1112 20:56:45.382618 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382643 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382821 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384233 kubelet[3255]: W1112 20:56:45.382828 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.382845 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.384233 kubelet[3255]: E1112 20:56:45.383043 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384471 kubelet[3255]: W1112 20:56:45.383052 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384471 kubelet[3255]: E1112 20:56:45.383069 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.384471 kubelet[3255]: E1112 20:56:45.383312 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384471 kubelet[3255]: W1112 20:56:45.383321 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384471 kubelet[3255]: E1112 20:56:45.383339 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.384471 kubelet[3255]: E1112 20:56:45.383619 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384471 kubelet[3255]: W1112 20:56:45.383630 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384471 kubelet[3255]: E1112 20:56:45.383671 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.384717 kubelet[3255]: E1112 20:56:45.384700 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.384717 kubelet[3255]: W1112 20:56:45.384713 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.384830 kubelet[3255]: E1112 20:56:45.384735 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:56:45.385059 kubelet[3255]: E1112 20:56:45.385042 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.385059 kubelet[3255]: W1112 20:56:45.385055 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.385200 kubelet[3255]: E1112 20:56:45.385146 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:56:45.385655 kubelet[3255]: E1112 20:56:45.385635 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:56:45.385655 kubelet[3255]: W1112 20:56:45.385649 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:56:45.386058 kubelet[3255]: E1112 20:56:45.385672 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Nov 12 20:56:45.386058 kubelet[3255]: E1112 20:56:45.385897 3255 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Nov 12 20:56:45.386058 kubelet[3255]: W1112 20:56:45.385908 3255 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Nov 12 20:56:45.386058 kubelet[3255]: E1112 20:56:45.385924 3255 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Nov 12 20:56:46.099609 containerd[1712]: time="2024-11-12T20:56:46.099565403Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:46.101655 containerd[1712]: time="2024-11-12T20:56:46.101596928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116"
Nov 12 20:56:46.108203 containerd[1712]: time="2024-11-12T20:56:46.106735590Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:46.114441 containerd[1712]: time="2024-11-12T20:56:46.114411782Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:46.115102 containerd[1712]: time="2024-11-12T20:56:46.115065690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.328650332s"
Nov 12 20:56:46.115546 containerd[1712]: time="2024-11-12T20:56:46.115106091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\""
Nov 12 20:56:46.120329 containerd[1712]: time="2024-11-12T20:56:46.120288653Z" level=info msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Nov 12 20:56:46.151025 kubelet[3255]: E1112 20:56:46.150987 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:46.165899 containerd[1712]: time="2024-11-12T20:56:46.165855603Z" level=info msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb\""
Nov 12 20:56:46.166460 containerd[1712]: time="2024-11-12T20:56:46.166358109Z" level=info msg="StartContainer for \"6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb\""
Nov 12 20:56:46.203380 systemd[1]: Started cri-containerd-6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb.scope - libcontainer container 6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb.
Nov 12 20:56:46.234638 containerd[1712]: time="2024-11-12T20:56:46.234462131Z" level=info msg="StartContainer for \"6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb\" returns successfully"
Nov 12 20:56:46.253901 systemd[1]: cri-containerd-6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb.scope: Deactivated successfully.
Nov 12 20:56:46.264707 kubelet[3255]: I1112 20:56:46.264463 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:56:46.291657 kubelet[3255]: I1112 20:56:46.289984 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-6c5779f8c8-hk4jr" podStartSLOduration=2.252944248 podStartE2EDuration="4.2899264s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:56:42.748395693 +0000 UTC m=+17.823096792" lastFinishedPulling="2024-11-12 20:56:44.785377845 +0000 UTC m=+19.860078944" observedRunningTime="2024-11-12 20:56:45.276100167 +0000 UTC m=+20.350801266" watchObservedRunningTime="2024-11-12 20:56:46.2899264 +0000 UTC m=+21.364627499"
Nov 12 20:56:46.291081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb-rootfs.mount: Deactivated successfully.
Nov 12 20:56:47.555366 containerd[1712]: time="2024-11-12T20:56:47.555292869Z" level=info msg="shim disconnected" id=6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb namespace=k8s.io
Nov 12 20:56:47.555366 containerd[1712]: time="2024-11-12T20:56:47.555360170Z" level=warning msg="cleaning up after shim disconnected" id=6b373724e7c05f4493f0e2a6becc396787534ee3835eaf3c268912c7623a0fcb namespace=k8s.io
Nov 12 20:56:47.555366 containerd[1712]: time="2024-11-12T20:56:47.555372870Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Nov 12 20:56:48.150776 kubelet[3255]: E1112 20:56:48.150706 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:48.270669 containerd[1712]: time="2024-11-12T20:56:48.270612100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\""
Nov 12 20:56:50.150805 kubelet[3255]: E1112 20:56:50.150430 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:52.151361 kubelet[3255]: E1112 20:56:52.151305 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152"
Nov 12 20:56:52.192438 containerd[1712]: time="2024-11-12T20:56:52.192389663Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:52.194546 containerd[1712]: time="2024-11-12T20:56:52.194476191Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683"
Nov 12 20:56:52.199866 containerd[1712]: time="2024-11-12T20:56:52.199808360Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:52.205066 containerd[1712]: time="2024-11-12T20:56:52.205003427Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:56:52.205846 containerd[1712]: time="2024-11-12T20:56:52.205711536Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 3.935040335s"
Nov 12 20:56:52.205846 containerd[1712]: time="2024-11-12T20:56:52.205749737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\""
Nov 12 20:56:52.208412 containerd[1712]: time="2024-11-12T20:56:52.208269970Z" level=info msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Nov 12 20:56:52.253902 containerd[1712]: time="2024-11-12T20:56:52.253852261Z" level=info msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b\""
Nov 12 20:56:52.254449 containerd[1712]: time="2024-11-12T20:56:52.254403368Z" level=info msg="StartContainer for \"00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b\""
Nov 12 20:56:52.294440 systemd[1]: Started cri-containerd-00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b.scope - libcontainer container 00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b.
Nov 12 20:56:52.327546 containerd[1712]: time="2024-11-12T20:56:52.327507817Z" level=info msg="StartContainer for \"00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b\" returns successfully"
Nov 12 20:56:53.736985 containerd[1712]: time="2024-11-12T20:56:53.736917106Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:56:53.738863 systemd[1]: cri-containerd-00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b.scope: Deactivated successfully.
Nov 12 20:56:53.761510 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b-rootfs.mount: Deactivated successfully.
Nov 12 20:56:53.772212 kubelet[3255]: I1112 20:56:53.771402 3255 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.801644 3255 topology_manager.go:215] "Topology Admit Handler" podUID="1a6a914b-2623-44ba-a104-8006201e1852" podNamespace="kube-system" podName="coredns-76f75df574-ql4bt" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.808202 3255 topology_manager.go:215] "Topology Admit Handler" podUID="31b5c95e-dc8f-4bde-b086-78c0f44c5289" podNamespace="kube-system" podName="coredns-76f75df574-zbhth" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.813625 3255 topology_manager.go:215] "Topology Admit Handler" podUID="5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2" podNamespace="calico-system" podName="calico-kube-controllers-7b77f44dcc-47lhq" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.818995 3255 topology_manager.go:215] "Topology Admit Handler" podUID="c5c951fd-1efc-4251-a9ef-b6d54bc7597b" podNamespace="calico-apiserver" podName="calico-apiserver-d9f56cd6-j2pht" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.819159 3255 topology_manager.go:215] "Topology Admit Handler" podUID="3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482" podNamespace="calico-apiserver" podName="calico-apiserver-d9f56cd6-v2g28" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.950794 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9tjr\" (UniqueName: \"kubernetes.io/projected/31b5c95e-dc8f-4bde-b086-78c0f44c5289-kube-api-access-f9tjr\") pod \"coredns-76f75df574-zbhth\" (UID: \"31b5c95e-dc8f-4bde-b086-78c0f44c5289\") " pod="kube-system/coredns-76f75df574-zbhth" Nov 12 20:56:54.276330 kubelet[3255]: I1112 20:56:53.950928 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cfn2\" (UniqueName: 
\"kubernetes.io/projected/c5c951fd-1efc-4251-a9ef-b6d54bc7597b-kube-api-access-4cfn2\") pod \"calico-apiserver-d9f56cd6-j2pht\" (UID: \"c5c951fd-1efc-4251-a9ef-b6d54bc7597b\") " pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" Nov 12 20:56:53.817834 systemd[1]: Created slice kubepods-burstable-pod1a6a914b_2623_44ba_a104_8006201e1852.slice - libcontainer container kubepods-burstable-pod1a6a914b_2623_44ba_a104_8006201e1852.slice. Nov 12 20:56:54.276901 kubelet[3255]: I1112 20:56:53.951117 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls9s8\" (UniqueName: \"kubernetes.io/projected/1a6a914b-2623-44ba-a104-8006201e1852-kube-api-access-ls9s8\") pod \"coredns-76f75df574-ql4bt\" (UID: \"1a6a914b-2623-44ba-a104-8006201e1852\") " pod="kube-system/coredns-76f75df574-ql4bt" Nov 12 20:56:54.276901 kubelet[3255]: I1112 20:56:53.951157 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjpjv\" (UniqueName: \"kubernetes.io/projected/5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2-kube-api-access-vjpjv\") pod \"calico-kube-controllers-7b77f44dcc-47lhq\" (UID: \"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2\") " pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" Nov 12 20:56:54.276901 kubelet[3255]: I1112 20:56:53.951177 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pl4z4\" (UniqueName: \"kubernetes.io/projected/3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482-kube-api-access-pl4z4\") pod \"calico-apiserver-d9f56cd6-v2g28\" (UID: \"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482\") " pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" Nov 12 20:56:54.276901 kubelet[3255]: I1112 20:56:53.951223 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/1a6a914b-2623-44ba-a104-8006201e1852-config-volume\") pod \"coredns-76f75df574-ql4bt\" (UID: \"1a6a914b-2623-44ba-a104-8006201e1852\") " pod="kube-system/coredns-76f75df574-ql4bt" Nov 12 20:56:54.276901 kubelet[3255]: I1112 20:56:53.951264 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2-tigera-ca-bundle\") pod \"calico-kube-controllers-7b77f44dcc-47lhq\" (UID: \"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2\") " pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" Nov 12 20:56:53.831993 systemd[1]: Created slice kubepods-burstable-pod31b5c95e_dc8f_4bde_b086_78c0f44c5289.slice - libcontainer container kubepods-burstable-pod31b5c95e_dc8f_4bde_b086_78c0f44c5289.slice. Nov 12 20:56:54.280476 kubelet[3255]: I1112 20:56:53.951339 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482-calico-apiserver-certs\") pod \"calico-apiserver-d9f56cd6-v2g28\" (UID: \"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482\") " pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" Nov 12 20:56:54.280476 kubelet[3255]: I1112 20:56:53.951379 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/c5c951fd-1efc-4251-a9ef-b6d54bc7597b-calico-apiserver-certs\") pod \"calico-apiserver-d9f56cd6-j2pht\" (UID: \"c5c951fd-1efc-4251-a9ef-b6d54bc7597b\") " pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" Nov 12 20:56:54.280476 kubelet[3255]: I1112 20:56:53.951429 3255 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31b5c95e-dc8f-4bde-b086-78c0f44c5289-config-volume\") pod 
\"coredns-76f75df574-zbhth\" (UID: \"31b5c95e-dc8f-4bde-b086-78c0f44c5289\") " pod="kube-system/coredns-76f75df574-zbhth" Nov 12 20:56:54.280659 containerd[1712]: time="2024-11-12T20:56:54.278694037Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fq22j,Uid:f56efb82-9a7d-420d-9381-5bbb29af7152,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:53.838110 systemd[1]: Created slice kubepods-besteffort-pod5a2a87e1_3ea6_4f5b_a14a_35a0a9288ab2.slice - libcontainer container kubepods-besteffort-pod5a2a87e1_3ea6_4f5b_a14a_35a0a9288ab2.slice. Nov 12 20:56:53.845826 systemd[1]: Created slice kubepods-besteffort-pod3d8a8d61_0a3b_4bd2_90b5_7da34e2b6482.slice - libcontainer container kubepods-besteffort-pod3d8a8d61_0a3b_4bd2_90b5_7da34e2b6482.slice. Nov 12 20:56:53.853636 systemd[1]: Created slice kubepods-besteffort-podc5c951fd_1efc_4251_a9ef_b6d54bc7597b.slice - libcontainer container kubepods-besteffort-podc5c951fd_1efc_4251_a9ef_b6d54bc7597b.slice. Nov 12 20:56:54.157003 systemd[1]: Created slice kubepods-besteffort-podf56efb82_9a7d_420d_9381_5bbb29af7152.slice - libcontainer container kubepods-besteffort-podf56efb82_9a7d_420d_9381_5bbb29af7152.slice. 
Nov 12 20:56:54.577174 containerd[1712]: time="2024-11-12T20:56:54.576921707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ql4bt,Uid:1a6a914b-2623-44ba-a104-8006201e1852,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:54.582677 containerd[1712]: time="2024-11-12T20:56:54.582633781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-j2pht,Uid:c5c951fd-1efc-4251-a9ef-b6d54bc7597b,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:54.588551 containerd[1712]: time="2024-11-12T20:56:54.588331355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zbhth,Uid:31b5c95e-dc8f-4bde-b086-78c0f44c5289,Namespace:kube-system,Attempt:0,}" Nov 12 20:56:54.588551 containerd[1712]: time="2024-11-12T20:56:54.588378256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b77f44dcc-47lhq,Uid:5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2,Namespace:calico-system,Attempt:0,}" Nov 12 20:56:54.589924 containerd[1712]: time="2024-11-12T20:56:54.589895675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-v2g28,Uid:3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:56:55.393177 containerd[1712]: time="2024-11-12T20:56:55.393106498Z" level=info msg="shim disconnected" id=00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b namespace=k8s.io Nov 12 20:56:55.393704 containerd[1712]: time="2024-11-12T20:56:55.393244500Z" level=warning msg="cleaning up after shim disconnected" id=00d8949e9f04bd8b2d44ea4a5643fc85a286189cfc0a9198fd1eefa44b52f14b namespace=k8s.io Nov 12 20:56:55.393704 containerd[1712]: time="2024-11-12T20:56:55.393263500Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:56:55.744457 containerd[1712]: time="2024-11-12T20:56:55.744220855Z" level=error msg="Failed to destroy network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.744979 containerd[1712]: time="2024-11-12T20:56:55.744926064Z" level=error msg="encountered an error cleaning up failed sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.745075 containerd[1712]: time="2024-11-12T20:56:55.745010465Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-j2pht,Uid:c5c951fd-1efc-4251-a9ef-b6d54bc7597b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.745483 kubelet[3255]: E1112 20:56:55.745321 3255 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.745483 kubelet[3255]: E1112 20:56:55.745403 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" Nov 12 20:56:55.745483 kubelet[3255]: E1112 20:56:55.745435 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" Nov 12 20:56:55.745927 kubelet[3255]: E1112 20:56:55.745510 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9f56cd6-j2pht_calico-apiserver(c5c951fd-1efc-4251-a9ef-b6d54bc7597b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9f56cd6-j2pht_calico-apiserver(c5c951fd-1efc-4251-a9ef-b6d54bc7597b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" podUID="c5c951fd-1efc-4251-a9ef-b6d54bc7597b" Nov 12 20:56:55.772818 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284-shm.mount: Deactivated successfully. 
Nov 12 20:56:55.788985 containerd[1712]: time="2024-11-12T20:56:55.788684332Z" level=error msg="Failed to destroy network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.789987 containerd[1712]: time="2024-11-12T20:56:55.789931948Z" level=error msg="encountered an error cleaning up failed sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.790865 containerd[1712]: time="2024-11-12T20:56:55.790815059Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zbhth,Uid:31b5c95e-dc8f-4bde-b086-78c0f44c5289,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.794550 kubelet[3255]: E1112 20:56:55.793552 3255 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.794550 kubelet[3255]: E1112 20:56:55.793607 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zbhth" Nov 12 20:56:55.794550 kubelet[3255]: E1112 20:56:55.793636 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-zbhth" Nov 12 20:56:55.794294 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4-shm.mount: Deactivated successfully. 
Nov 12 20:56:55.794812 kubelet[3255]: E1112 20:56:55.793698 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-zbhth_kube-system(31b5c95e-dc8f-4bde-b086-78c0f44c5289)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-zbhth_kube-system(31b5c95e-dc8f-4bde-b086-78c0f44c5289)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zbhth" podUID="31b5c95e-dc8f-4bde-b086-78c0f44c5289" Nov 12 20:56:55.829762 containerd[1712]: time="2024-11-12T20:56:55.829712064Z" level=error msg="Failed to destroy network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.833692 containerd[1712]: time="2024-11-12T20:56:55.833124308Z" level=error msg="encountered an error cleaning up failed sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.835213 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47-shm.mount: Deactivated successfully. 
Nov 12 20:56:55.838335 containerd[1712]: time="2024-11-12T20:56:55.838291475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ql4bt,Uid:1a6a914b-2623-44ba-a104-8006201e1852,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.838973 kubelet[3255]: E1112 20:56:55.838936 3255 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.839076 kubelet[3255]: E1112 20:56:55.839024 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ql4bt" Nov 12 20:56:55.839076 kubelet[3255]: E1112 20:56:55.839069 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-76f75df574-ql4bt" Nov 12 
20:56:55.839253 kubelet[3255]: E1112 20:56:55.839232 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-ql4bt_kube-system(1a6a914b-2623-44ba-a104-8006201e1852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-ql4bt_kube-system(1a6a914b-2623-44ba-a104-8006201e1852)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ql4bt" podUID="1a6a914b-2623-44ba-a104-8006201e1852" Nov 12 20:56:55.844770 containerd[1712]: time="2024-11-12T20:56:55.844306253Z" level=error msg="Failed to destroy network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.844770 containerd[1712]: time="2024-11-12T20:56:55.844645958Z" level=error msg="encountered an error cleaning up failed sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.844770 containerd[1712]: time="2024-11-12T20:56:55.844701159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fq22j,Uid:f56efb82-9a7d-420d-9381-5bbb29af7152,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.847433 kubelet[3255]: E1112 20:56:55.845481 3255 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.847433 kubelet[3255]: E1112 20:56:55.845540 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:55.847433 kubelet[3255]: E1112 20:56:55.845569 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-fq22j" Nov 12 20:56:55.847624 kubelet[3255]: E1112 20:56:55.845628 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-fq22j_calico-system(f56efb82-9a7d-420d-9381-5bbb29af7152)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-fq22j_calico-system(f56efb82-9a7d-420d-9381-5bbb29af7152)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152" Nov 12 20:56:55.848643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a-shm.mount: Deactivated successfully. Nov 12 20:56:55.852018 containerd[1712]: time="2024-11-12T20:56:55.851966853Z" level=error msg="Failed to destroy network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.855501 containerd[1712]: time="2024-11-12T20:56:55.855467098Z" level=error msg="encountered an error cleaning up failed sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.855942 containerd[1712]: time="2024-11-12T20:56:55.855751402Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b77f44dcc-47lhq,Uid:5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 
20:56:55.855876 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde-shm.mount: Deactivated successfully. Nov 12 20:56:55.856565 kubelet[3255]: E1112 20:56:55.856398 3255 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.856565 kubelet[3255]: E1112 20:56:55.856442 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" Nov 12 20:56:55.856565 kubelet[3255]: E1112 20:56:55.856466 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" Nov 12 20:56:55.856926 kubelet[3255]: E1112 20:56:55.856525 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7b77f44dcc-47lhq_calico-system(5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7b77f44dcc-47lhq_calico-system(5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" podUID="5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2" Nov 12 20:56:55.862906 containerd[1712]: time="2024-11-12T20:56:55.862780393Z" level=error msg="Failed to destroy network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.863364 containerd[1712]: time="2024-11-12T20:56:55.863291800Z" level=error msg="encountered an error cleaning up failed sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.863553 containerd[1712]: time="2024-11-12T20:56:55.863465602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-v2g28,Uid:3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.863795 kubelet[3255]: E1112 20:56:55.863737 3255 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:55.863795 kubelet[3255]: E1112 20:56:55.863783 3255 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" Nov 12 20:56:55.863920 kubelet[3255]: E1112 20:56:55.863811 3255 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" Nov 12 20:56:55.863920 kubelet[3255]: E1112 20:56:55.863885 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-d9f56cd6-v2g28_calico-apiserver(3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-d9f56cd6-v2g28_calico-apiserver(3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" podUID="3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482" Nov 12 20:56:56.301797 kubelet[3255]: I1112 20:56:56.301755 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:56:56.303252 containerd[1712]: time="2024-11-12T20:56:56.302757103Z" level=info msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" Nov 12 20:56:56.303252 containerd[1712]: time="2024-11-12T20:56:56.302951805Z" level=info msg="Ensure that sandbox 6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a in task-service has been cleanup successfully" Nov 12 20:56:56.303986 kubelet[3255]: I1112 20:56:56.303948 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:56:56.305981 containerd[1712]: time="2024-11-12T20:56:56.305950344Z" level=info msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" Nov 12 20:56:56.306164 containerd[1712]: time="2024-11-12T20:56:56.306128546Z" level=info msg="Ensure that sandbox e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47 in task-service has been cleanup successfully" Nov 12 20:56:56.311919 containerd[1712]: time="2024-11-12T20:56:56.311706519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:56:56.312486 kubelet[3255]: I1112 20:56:56.312453 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:56:56.314092 containerd[1712]: time="2024-11-12T20:56:56.314066849Z" level=info msg="StopPodSandbox for 
\"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" Nov 12 20:56:56.314532 containerd[1712]: time="2024-11-12T20:56:56.314356753Z" level=info msg="Ensure that sandbox df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde in task-service has been cleanup successfully" Nov 12 20:56:56.323389 kubelet[3255]: I1112 20:56:56.323359 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:56:56.324692 containerd[1712]: time="2024-11-12T20:56:56.324665087Z" level=info msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" Nov 12 20:56:56.325215 containerd[1712]: time="2024-11-12T20:56:56.324968791Z" level=info msg="Ensure that sandbox 0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284 in task-service has been cleanup successfully" Nov 12 20:56:56.331760 kubelet[3255]: I1112 20:56:56.331439 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:56:56.332234 containerd[1712]: time="2024-11-12T20:56:56.331985282Z" level=info msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" Nov 12 20:56:56.333225 containerd[1712]: time="2024-11-12T20:56:56.332176684Z" level=info msg="Ensure that sandbox 56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725 in task-service has been cleanup successfully" Nov 12 20:56:56.341220 kubelet[3255]: I1112 20:56:56.340861 3255 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:56:56.342257 containerd[1712]: time="2024-11-12T20:56:56.341742508Z" level=info msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" Nov 12 20:56:56.345032 containerd[1712]: 
time="2024-11-12T20:56:56.344917850Z" level=info msg="Ensure that sandbox 12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4 in task-service has been cleanup successfully" Nov 12 20:56:56.438868 containerd[1712]: time="2024-11-12T20:56:56.438662166Z" level=error msg="StopPodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" failed" error="failed to destroy network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.440173 kubelet[3255]: E1112 20:56:56.439523 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:56:56.440173 kubelet[3255]: E1112 20:56:56.439623 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde"} Nov 12 20:56:56.440173 kubelet[3255]: E1112 20:56:56.439674 3255 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" Nov 12 20:56:56.440173 kubelet[3255]: E1112 20:56:56.439713 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" podUID="5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2" Nov 12 20:56:56.440546 containerd[1712]: time="2024-11-12T20:56:56.440351488Z" level=error msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" failed" error="failed to destroy network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.440777 kubelet[3255]: E1112 20:56:56.440751 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:56:56.440864 kubelet[3255]: E1112 20:56:56.440797 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47"} Nov 12 20:56:56.440864 kubelet[3255]: E1112 20:56:56.440841 3255 
kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"1a6a914b-2623-44ba-a104-8006201e1852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:56.440982 kubelet[3255]: E1112 20:56:56.440875 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"1a6a914b-2623-44ba-a104-8006201e1852\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-ql4bt" podUID="1a6a914b-2623-44ba-a104-8006201e1852" Nov 12 20:56:56.447638 containerd[1712]: time="2024-11-12T20:56:56.447299578Z" level=error msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" failed" error="failed to destroy network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.447737 kubelet[3255]: E1112 20:56:56.447507 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:56:56.447737 kubelet[3255]: E1112 20:56:56.447542 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a"} Nov 12 20:56:56.447737 kubelet[3255]: E1112 20:56:56.447590 3255 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f56efb82-9a7d-420d-9381-5bbb29af7152\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:56.447737 kubelet[3255]: E1112 20:56:56.447625 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f56efb82-9a7d-420d-9381-5bbb29af7152\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-fq22j" podUID="f56efb82-9a7d-420d-9381-5bbb29af7152" Nov 12 20:56:56.449429 containerd[1712]: time="2024-11-12T20:56:56.449222103Z" level=error msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" failed" error="failed to destroy network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.449534 kubelet[3255]: E1112 20:56:56.449453 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:56:56.449534 kubelet[3255]: E1112 20:56:56.449490 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284"} Nov 12 20:56:56.449636 kubelet[3255]: E1112 20:56:56.449538 3255 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"c5c951fd-1efc-4251-a9ef-b6d54bc7597b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:56.449636 kubelet[3255]: E1112 20:56:56.449574 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"c5c951fd-1efc-4251-a9ef-b6d54bc7597b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" podUID="c5c951fd-1efc-4251-a9ef-b6d54bc7597b" Nov 12 20:56:56.457328 containerd[1712]: time="2024-11-12T20:56:56.457270008Z" level=error msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" failed" error="failed to destroy network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.457807 kubelet[3255]: E1112 20:56:56.457674 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:56:56.457807 kubelet[3255]: E1112 20:56:56.457708 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725"} Nov 12 20:56:56.457807 kubelet[3255]: E1112 20:56:56.457747 3255 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:56.457807 kubelet[3255]: E1112 20:56:56.457789 3255 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" podUID="3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482" Nov 12 20:56:56.459771 containerd[1712]: time="2024-11-12T20:56:56.459723339Z" level=error msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" failed" error="failed to destroy network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:56:56.459922 kubelet[3255]: E1112 20:56:56.459902 3255 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:56:56.459990 kubelet[3255]: E1112 20:56:56.459934 3255 kuberuntime_manager.go:1381] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4"} Nov 12 20:56:56.459990 kubelet[3255]: E1112 20:56:56.459976 3255 kuberuntime_manager.go:1081] "killPodWithSyncResult failed" err="failed to 
\"KillPodSandbox\" for \"31b5c95e-dc8f-4bde-b086-78c0f44c5289\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:56:56.460093 kubelet[3255]: E1112 20:56:56.460012 3255 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"31b5c95e-dc8f-4bde-b086-78c0f44c5289\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-76f75df574-zbhth" podUID="31b5c95e-dc8f-4bde-b086-78c0f44c5289" Nov 12 20:56:56.761603 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725-shm.mount: Deactivated successfully. 
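Every sandbox add and delete in the entries above fails on the same stat of /var/lib/calico/nodename — the file calico/node writes once it starts, so no pod networking operation can succeed until that container is up. A minimal Python sketch of that gating check (illustrative only; the real check lives in the Go Calico CNI plugin, and the function name here is my own):

```python
import os
import tempfile

CALICO_DIR = "/var/lib/calico"  # path taken from the log messages above

def read_nodename(calico_dir=CALICO_DIR):
    # Illustrative re-creation of the failing check: the CNI plugin
    # stat()s <calico_dir>/nodename, which calico/node writes at
    # startup; when the file is absent, every CNI add/delete errors.
    path = os.path.join(calico_dir, "nodename")
    if not os.path.exists(path):
        raise RuntimeError(
            f"stat {path}: no such file or directory: check that the "
            f"calico/node container is running and has mounted {calico_dir}/"
        )
    with open(path) as f:
        return f.read().strip()

# Demonstrate against an empty temp directory rather than the live host:
with tempfile.TemporaryDirectory() as d:
    try:
        read_nodename(d)
    except RuntimeError as err:
        print(f"CNI add would fail: {err}")
```

Once calico-node starts (visible further down when the image pull completes and the container runs), the file appears and the retried StopPodSandbox/RunPodSandbox calls begin to succeed.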
Nov 12 20:57:00.799104 kubelet[3255]: I1112 20:57:00.799069 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:01.672809 update_engine[1692]: I20241112 20:57:01.672752 1692 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Nov 12 20:57:01.672809 update_engine[1692]: I20241112 20:57:01.672811 1692 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Nov 12 20:57:01.673838 update_engine[1692]: I20241112 20:57:01.673023 1692 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Nov 12 20:57:01.674270 update_engine[1692]: I20241112 20:57:01.674069 1692 omaha_request_params.cc:62] Current group set to stable Nov 12 20:57:01.674270 update_engine[1692]: I20241112 20:57:01.674211 1692 update_attempter.cc:499] Already updated boot flags. Skipping. Nov 12 20:57:01.674270 update_engine[1692]: I20241112 20:57:01.674225 1692 update_attempter.cc:643] Scheduling an action processor start. 
Nov 12 20:57:01.674270 update_engine[1692]: I20241112 20:57:01.674244 1692 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 20:57:01.674462 update_engine[1692]: I20241112 20:57:01.674278 1692 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Nov 12 20:57:01.674462 update_engine[1692]: I20241112 20:57:01.674346 1692 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 20:57:01.674462 update_engine[1692]: I20241112 20:57:01.674356 1692 omaha_request_action.cc:272] Request: Nov 12 20:57:01.674462 update_engine[1692]: [Omaha request XML body not recoverable: markup stripped during log capture] Nov 12 20:57:01.674462 update_engine[1692]: I20241112 20:57:01.674364 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:57:01.676227 locksmithd[1755]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Nov 12 20:57:01.676715 update_engine[1692]: I20241112 20:57:01.676126 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:57:01.676715 update_engine[1692]: I20241112 20:57:01.676586 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:57:01.705434 update_engine[1692]: E20241112 20:57:01.705399 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:57:01.705527 update_engine[1692]: I20241112 20:57:01.705485 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Nov 12 20:57:02.274374 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1601180681.mount: Deactivated successfully. 
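The locksmithd record above is a flat key=value status line with one shell-quoted field. A small parser sketch for lines of that shape (function name my own; shlex handles the quoted CurrentOperation value):

```python
import shlex

def parse_status(line):
    # Split on whitespace respecting double quotes, then split each
    # token into key and value at the first '='.
    return dict(tok.split("=", 1) for tok in shlex.split(line))

# The locksmithd line from the log above:
status = parse_status(
    'LastCheckedTime=0 Progress=0 '
    'CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" '
    'NewVersion=0.0.0 NewSize=0'
)
```

This is only a sketch for eyeballing such records; it does not handle keys that themselves contain '=' inside quoted values.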
Nov 12 20:57:02.317201 containerd[1712]: time="2024-11-12T20:57:02.317152855Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:02.319359 containerd[1712]: time="2024-11-12T20:57:02.319298382Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:57:02.322005 containerd[1712]: time="2024-11-12T20:57:02.321956015Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:02.326120 containerd[1712]: time="2024-11-12T20:57:02.326089267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:02.327080 containerd[1712]: time="2024-11-12T20:57:02.326624974Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 6.014879054s" Nov 12 20:57:02.327080 containerd[1712]: time="2024-11-12T20:57:02.326665274Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:57:02.338129 containerd[1712]: time="2024-11-12T20:57:02.338098417Z" level=info msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:57:02.389119 containerd[1712]: time="2024-11-12T20:57:02.389080854Z" level=info 
msg="CreateContainer within sandbox \"b560b38d7cb068e1f9ee8b882c545353d158e9650b272477002770aa1094da5c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd\"" Nov 12 20:57:02.389686 containerd[1712]: time="2024-11-12T20:57:02.389570461Z" level=info msg="StartContainer for \"537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd\"" Nov 12 20:57:02.420388 systemd[1]: Started cri-containerd-537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd.scope - libcontainer container 537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd. Nov 12 20:57:02.450737 containerd[1712]: time="2024-11-12T20:57:02.450617924Z" level=info msg="StartContainer for \"537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd\" returns successfully" Nov 12 20:57:02.572873 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:57:02.573002 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
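The pod_startup_latency_tracker record just below carries RFC 3339 timestamps with nine-digit (nanosecond) fractions, which Python's datetime (%f is microseconds only) cannot parse directly. An exact-arithmetic sketch using the logged values (helper name my own; both timestamps fall on the same date, so clock times alone suffice):

```python
from fractions import Fraction

def ts_to_seconds(hms):
    # HH:MM:SS[.nnnnnnnnn] -> exact seconds since midnight. Fraction
    # keeps the nanosecond fraction exact instead of rounding in float.
    h, m, s = hms.split(":")
    if "." in s:
        whole, frac = s.split(".")
        sec = Fraction(int(whole)) + Fraction(int(frac), 10 ** len(frac))
    else:
        sec = Fraction(int(s))
    return 3600 * Fraction(int(h)) + 60 * Fraction(int(m)) + sec

# Values from the calico-node-7pcg6 record:
created = ts_to_seconds("20:56:42")             # podCreationTimestamp
observed = ts_to_seconds("20:57:03.387869842")  # observedRunningTime
e2e = observed - created  # podStartE2EDuration = 21.387869842s
```

The same subtraction over firstStartedPulling and lastFinishedPulling yields the image-pull portion of the startup latency.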
Nov 12 20:57:03.388884 kubelet[3255]: I1112 20:57:03.387924 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-7pcg6" podStartSLOduration=1.9249446 podStartE2EDuration="21.387869842s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:56:42.864004735 +0000 UTC m=+17.938705834" lastFinishedPulling="2024-11-12 20:57:02.326929977 +0000 UTC m=+37.401631076" observedRunningTime="2024-11-12 20:57:03.387366436 +0000 UTC m=+38.462067635" watchObservedRunningTime="2024-11-12 20:57:03.387869842 +0000 UTC m=+38.462570941" Nov 12 20:57:04.242222 kernel: bpftool[4532]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:57:04.511000 systemd-networkd[1569]: vxlan.calico: Link UP Nov 12 20:57:04.511009 systemd-networkd[1569]: vxlan.calico: Gained carrier Nov 12 20:57:06.131394 systemd-networkd[1569]: vxlan.calico: Gained IPv6LL Nov 12 20:57:07.153048 containerd[1712]: time="2024-11-12T20:57:07.152786171Z" level=info msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.202 [INFO][4644] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.203 [INFO][4644] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" iface="eth0" netns="/var/run/netns/cni-3a0feb5b-cf58-0b02-22ff-11bedceb905d" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.204 [INFO][4644] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" iface="eth0" netns="/var/run/netns/cni-3a0feb5b-cf58-0b02-22ff-11bedceb905d" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.205 [INFO][4644] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" iface="eth0" netns="/var/run/netns/cni-3a0feb5b-cf58-0b02-22ff-11bedceb905d" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.205 [INFO][4644] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.205 [INFO][4644] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.224 [INFO][4651] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.224 [INFO][4651] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.224 [INFO][4651] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.230 [WARNING][4651] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.230 [INFO][4651] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.232 [INFO][4651] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:07.235902 containerd[1712]: 2024-11-12 20:57:07.234 [INFO][4644] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:07.237079 containerd[1712]: time="2024-11-12T20:57:07.236894946Z" level=info msg="TearDown network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" successfully" Nov 12 20:57:07.237079 containerd[1712]: time="2024-11-12T20:57:07.236937447Z" level=info msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" returns successfully" Nov 12 20:57:07.237969 containerd[1712]: time="2024-11-12T20:57:07.237934159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zbhth,Uid:31b5c95e-dc8f-4bde-b086-78c0f44c5289,Namespace:kube-system,Attempt:1,}" Nov 12 20:57:07.241898 systemd[1]: run-netns-cni\x2d3a0feb5b\x2dcf58\x2d0b02\x2d22ff\x2d11bedceb905d.mount: Deactivated successfully. 
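The run-netns and shm mount units in these entries encode filesystem paths with systemd's unit-name escaping: '/' becomes '-', and a literal '-' in the path is written as \x2d. A minimal decoder sketch (handles only the \x2d escape seen here; the real systemd-escape covers the full escape set):

```python
def unescape_mount_unit(unit):
    # Strip the .mount suffix, protect the \x2d escapes with a
    # sentinel, turn the remaining '-' separators into '/', then
    # restore the literal dashes and the leading '/'.
    name = unit.removesuffix(".mount")
    sentinel = "\x00"
    name = name.replace(r"\x2d", sentinel)
    return "/" + name.replace("-", "/").replace(sentinel, "-")

# The run-netns unit deactivated in the log above:
unit = r"run-netns-cni\x2d3a0feb5b\x2dcf58\x2d0b02\x2d22ff\x2d11bedceb905d.mount"
path = unescape_mount_unit(unit)
# path == "/run/netns/cni-3a0feb5b-cf58-0b02-22ff-11bedceb905d"
```

This matches the netns path logged by the CNI teardown ("netns=/var/run/netns/cni-3a0feb5b-..."), confirming the mount unit and the plugin were operating on the same namespace.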
Nov 12 20:57:07.375930 systemd-networkd[1569]: caliaa8d09fbe38: Link UP Nov 12 20:57:07.376156 systemd-networkd[1569]: caliaa8d09fbe38: Gained carrier Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.310 [INFO][4657] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0 coredns-76f75df574- kube-system 31b5c95e-dc8f-4bde-b086-78c0f44c5289 753 0 2024-11-12 20:56:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 coredns-76f75df574-zbhth eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaa8d09fbe38 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.310 [INFO][4657] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.336 [INFO][4668] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" HandleID="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.345 [INFO][4668] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" HandleID="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"coredns-76f75df574-zbhth", "timestamp":"2024-11-12 20:57:07.336198416 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.345 [INFO][4668] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.345 [INFO][4668] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.345 [INFO][4668] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.346 [INFO][4668] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.349 [INFO][4668] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.353 [INFO][4668] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.354 [INFO][4668] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.356 [INFO][4668] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.356 [INFO][4668] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.357 [INFO][4668] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.363 [INFO][4668] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.370 [INFO][4668] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.1/26] block=192.168.3.0/26 handle="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.370 [INFO][4668] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.1/26] handle="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.370 [INFO][4668] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:07.398244 containerd[1712]: 2024-11-12 20:57:07.370 [INFO][4668] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.1/26] IPv6=[] ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" HandleID="k8s-pod-network.b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.372 [INFO][4657] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31b5c95e-dc8f-4bde-b086-78c0f44c5289", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"coredns-76f75df574-zbhth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa8d09fbe38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.372 [INFO][4657] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.1/32] ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.372 [INFO][4657] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaa8d09fbe38 ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.374 [INFO][4657] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.375 [INFO][4657] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31b5c95e-dc8f-4bde-b086-78c0f44c5289", ResourceVersion:"753", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb", Pod:"coredns-76f75df574-zbhth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa8d09fbe38", MAC:"8a:ff:77:5e:96:f3", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:07.400376 containerd[1712]: 2024-11-12 20:57:07.394 [INFO][4657] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb" Namespace="kube-system" Pod="coredns-76f75df574-zbhth" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:07.426281 containerd[1712]: time="2024-11-12T20:57:07.425944663Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:07.426281 containerd[1712]: time="2024-11-12T20:57:07.426015564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:07.426281 containerd[1712]: time="2024-11-12T20:57:07.426053665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:07.426523 containerd[1712]: time="2024-11-12T20:57:07.426181166Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:07.459354 systemd[1]: Started cri-containerd-b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb.scope - libcontainer container b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb. 
Nov 12 20:57:07.498526 containerd[1712]: time="2024-11-12T20:57:07.498392389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-zbhth,Uid:31b5c95e-dc8f-4bde-b086-78c0f44c5289,Namespace:kube-system,Attempt:1,} returns sandbox id \"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb\"" Nov 12 20:57:07.501965 containerd[1712]: time="2024-11-12T20:57:07.501927235Z" level=info msg="CreateContainer within sandbox \"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:57:07.541526 containerd[1712]: time="2024-11-12T20:57:07.541477340Z" level=info msg="CreateContainer within sandbox \"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e936336118793a51f4446c6c48fb150448d1123f47bcc3f04356e5594f35ae75\"" Nov 12 20:57:07.542577 containerd[1712]: time="2024-11-12T20:57:07.542455253Z" level=info msg="StartContainer for \"e936336118793a51f4446c6c48fb150448d1123f47bcc3f04356e5594f35ae75\"" Nov 12 20:57:07.570338 systemd[1]: Started cri-containerd-e936336118793a51f4446c6c48fb150448d1123f47bcc3f04356e5594f35ae75.scope - libcontainer container e936336118793a51f4446c6c48fb150448d1123f47bcc3f04356e5594f35ae75. 
Nov 12 20:57:07.599652 containerd[1712]: time="2024-11-12T20:57:07.599613184Z" level=info msg="StartContainer for \"e936336118793a51f4446c6c48fb150448d1123f47bcc3f04356e5594f35ae75\" returns successfully" Nov 12 20:57:08.152275 containerd[1712]: time="2024-11-12T20:57:08.151775243Z" level=info msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" Nov 12 20:57:08.152499 containerd[1712]: time="2024-11-12T20:57:08.152276950Z" level=info msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" Nov 12 20:57:08.156017 containerd[1712]: time="2024-11-12T20:57:08.155608992Z" level=info msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.243 [INFO][4792] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.247 [INFO][4792] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" iface="eth0" netns="/var/run/netns/cni-79dcd4f8-6bb1-bb43-ebdc-52446ca083a3" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.248 [INFO][4792] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" iface="eth0" netns="/var/run/netns/cni-79dcd4f8-6bb1-bb43-ebdc-52446ca083a3" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.249 [INFO][4792] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" iface="eth0" netns="/var/run/netns/cni-79dcd4f8-6bb1-bb43-ebdc-52446ca083a3" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.249 [INFO][4792] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.250 [INFO][4792] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.295 [INFO][4822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.295 [INFO][4822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.295 [INFO][4822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.305 [WARNING][4822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.305 [INFO][4822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.308 [INFO][4822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.313030 containerd[1712]: 2024-11-12 20:57:08.309 [INFO][4792] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:08.314487 containerd[1712]: time="2024-11-12T20:57:08.313924217Z" level=info msg="TearDown network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" successfully" Nov 12 20:57:08.314487 containerd[1712]: time="2024-11-12T20:57:08.313960117Z" level=info msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" returns successfully" Nov 12 20:57:08.314856 containerd[1712]: time="2024-11-12T20:57:08.314833428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ql4bt,Uid:1a6a914b-2623-44ba-a104-8006201e1852,Namespace:kube-system,Attempt:1,}" Nov 12 20:57:08.319211 systemd[1]: run-netns-cni\x2d79dcd4f8\x2d6bb1\x2dbb43\x2debdc\x2d52446ca083a3.mount: Deactivated successfully. 
Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.265 [INFO][4807] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.265 [INFO][4807] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" iface="eth0" netns="/var/run/netns/cni-f89c0ec3-aa65-fdcf-8788-e499824b78ab" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.265 [INFO][4807] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" iface="eth0" netns="/var/run/netns/cni-f89c0ec3-aa65-fdcf-8788-e499824b78ab" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.266 [INFO][4807] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" iface="eth0" netns="/var/run/netns/cni-f89c0ec3-aa65-fdcf-8788-e499824b78ab" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.266 [INFO][4807] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.266 [INFO][4807] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.311 [INFO][4829] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.311 
[INFO][4829] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.311 [INFO][4829] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.323 [WARNING][4829] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.323 [INFO][4829] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.325 [INFO][4829] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.327384 containerd[1712]: 2024-11-12 20:57:08.326 [INFO][4807] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:08.327949 containerd[1712]: time="2024-11-12T20:57:08.327674192Z" level=info msg="TearDown network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" successfully" Nov 12 20:57:08.327949 containerd[1712]: time="2024-11-12T20:57:08.327701193Z" level=info msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" returns successfully" Nov 12 20:57:08.332102 containerd[1712]: time="2024-11-12T20:57:08.330915834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-v2g28,Uid:3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:57:08.331828 systemd[1]: run-netns-cni\x2df89c0ec3\x2daa65\x2dfdcf\x2d8788\x2de499824b78ab.mount: Deactivated successfully. Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.245 [INFO][4803] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.246 [INFO][4803] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" iface="eth0" netns="/var/run/netns/cni-ef31c550-7a0c-a918-0ddc-ae0eb78205ce" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.247 [INFO][4803] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" iface="eth0" netns="/var/run/netns/cni-ef31c550-7a0c-a918-0ddc-ae0eb78205ce" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.248 [INFO][4803] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" iface="eth0" netns="/var/run/netns/cni-ef31c550-7a0c-a918-0ddc-ae0eb78205ce" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.248 [INFO][4803] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.248 [INFO][4803] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.312 [INFO][4821] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.312 [INFO][4821] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.325 [INFO][4821] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.336 [WARNING][4821] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.336 [INFO][4821] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.338 [INFO][4821] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.340112 containerd[1712]: 2024-11-12 20:57:08.338 [INFO][4803] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:08.343428 containerd[1712]: time="2024-11-12T20:57:08.341325067Z" level=info msg="TearDown network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" successfully" Nov 12 20:57:08.343428 containerd[1712]: time="2024-11-12T20:57:08.341350667Z" level=info msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" returns successfully" Nov 12 20:57:08.343428 containerd[1712]: time="2024-11-12T20:57:08.342499182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fq22j,Uid:f56efb82-9a7d-420d-9381-5bbb29af7152,Namespace:calico-system,Attempt:1,}" Nov 12 20:57:08.344472 systemd[1]: run-netns-cni\x2def31c550\x2d7a0c\x2da918\x2d0ddc\x2dae0eb78205ce.mount: Deactivated successfully. 
Nov 12 20:57:08.404270 kubelet[3255]: I1112 20:57:08.402450 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-zbhth" podStartSLOduration=33.402397148 podStartE2EDuration="33.402397148s" podCreationTimestamp="2024-11-12 20:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:08.38533623 +0000 UTC m=+43.460037429" watchObservedRunningTime="2024-11-12 20:57:08.402397148 +0000 UTC m=+43.477098347" Nov 12 20:57:08.435591 systemd-networkd[1569]: caliaa8d09fbe38: Gained IPv6LL Nov 12 20:57:08.616386 systemd-networkd[1569]: cali74bc018f327: Link UP Nov 12 20:57:08.616640 systemd-networkd[1569]: cali74bc018f327: Gained carrier Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.499 [INFO][4843] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0 coredns-76f75df574- kube-system 1a6a914b-2623-44ba-a104-8006201e1852 766 0 2024-11-12 20:56:35 +0000 UTC map[k8s-app:kube-dns pod-template-hash:76f75df574 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 coredns-76f75df574-ql4bt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali74bc018f327 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.499 [INFO][4843] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" 
Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.563 [INFO][4880] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" HandleID="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.583 [INFO][4880] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" HandleID="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"coredns-76f75df574-ql4bt", "timestamp":"2024-11-12 20:57:08.563512708 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.584 [INFO][4880] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.584 [INFO][4880] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.584 [INFO][4880] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.586 [INFO][4880] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.590 [INFO][4880] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.593 [INFO][4880] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.594 [INFO][4880] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.596 [INFO][4880] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.596 [INFO][4880] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.597 [INFO][4880] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3 Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.601 [INFO][4880] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4880] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.2/26] block=192.168.3.0/26 handle="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4880] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.2/26] handle="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4880] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.637785 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4880] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.2/26] IPv6=[] ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" HandleID="k8s-pod-network.c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.610 [INFO][4843] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1a6a914b-2623-44ba-a104-8006201e1852", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"coredns-76f75df574-ql4bt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74bc018f327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.610 [INFO][4843] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.2/32] ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.610 [INFO][4843] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali74bc018f327 ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.614 [INFO][4843] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.614 [INFO][4843] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1a6a914b-2623-44ba-a104-8006201e1852", ResourceVersion:"766", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3", Pod:"coredns-76f75df574-ql4bt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74bc018f327", MAC:"de:a1:4a:e9:9e:11", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.638723 containerd[1712]: 2024-11-12 20:57:08.635 [INFO][4843] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3" Namespace="kube-system" Pod="coredns-76f75df574-ql4bt" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:08.677606 systemd-networkd[1569]: calib3f2a70a269: Link UP Nov 12 20:57:08.679403 systemd-networkd[1569]: calib3f2a70a269: Gained carrier Nov 12 20:57:08.690675 containerd[1712]: time="2024-11-12T20:57:08.689473518Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:08.691217 containerd[1712]: time="2024-11-12T20:57:08.690891136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:08.692085 containerd[1712]: time="2024-11-12T20:57:08.691059738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.697224 containerd[1712]: time="2024-11-12T20:57:08.697005114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.515 [INFO][4855] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0 calico-apiserver-d9f56cd6- calico-apiserver 3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482 768 0 2024-11-12 20:56:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9f56cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 calico-apiserver-d9f56cd6-v2g28 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib3f2a70a269 [] []}} ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.516 [INFO][4855] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.574 [INFO][4888] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" HandleID="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.587 [INFO][4888] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" HandleID="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030c9a0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"calico-apiserver-d9f56cd6-v2g28", "timestamp":"2024-11-12 20:57:08.574402747 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.587 [INFO][4888] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4888] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.608 [INFO][4888] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.610 [INFO][4888] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.620 [INFO][4888] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.628 [INFO][4888] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.634 [INFO][4888] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.639 [INFO][4888] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.639 [INFO][4888] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.642 [INFO][4888] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.652 [INFO][4888] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4888] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.3/26] block=192.168.3.0/26 handle="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4888] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.3/26] handle="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4888] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.722958 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4888] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.3/26] IPv6=[] ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" HandleID="k8s-pod-network.a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.726049 containerd[1712]: 2024-11-12 20:57:08.671 [INFO][4855] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"calico-apiserver-d9f56cd6-v2g28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3f2a70a269", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.726049 containerd[1712]: 2024-11-12 20:57:08.671 [INFO][4855] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.3/32] ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.726049 containerd[1712]: 2024-11-12 20:57:08.671 [INFO][4855] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib3f2a70a269 ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.726049 containerd[1712]: 2024-11-12 20:57:08.679 [INFO][4855] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 
20:57:08.726049 containerd[1712]: 2024-11-12 20:57:08.680 [INFO][4855] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482", ResourceVersion:"768", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb", Pod:"calico-apiserver-d9f56cd6-v2g28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3f2a70a269", MAC:"5a:3b:7c:b5:7e:62", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.726049 containerd[1712]: 
2024-11-12 20:57:08.717 [INFO][4855] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-v2g28" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:08.723371 systemd[1]: Started cri-containerd-c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3.scope - libcontainer container c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3. Nov 12 20:57:08.766824 systemd-networkd[1569]: cali806f2408aee: Link UP Nov 12 20:57:08.767067 systemd-networkd[1569]: cali806f2408aee: Gained carrier Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.512 [INFO][4848] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0 csi-node-driver- calico-system f56efb82-9a7d-420d-9381-5bbb29af7152 767 0 2024-11-12 20:56:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:64dd8495dc k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 csi-node-driver-fq22j eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali806f2408aee [] []}} ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.513 [INFO][4848] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" 
WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.574 [INFO][4884] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" HandleID="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.588 [INFO][4884] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" HandleID="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318920), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"csi-node-driver-fq22j", "timestamp":"2024-11-12 20:57:08.574697251 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.588 [INFO][4884] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4884] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.667 [INFO][4884] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.671 [INFO][4884] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.682 [INFO][4884] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.699 [INFO][4884] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.704 [INFO][4884] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.715 [INFO][4884] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.715 [INFO][4884] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.718 [INFO][4884] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.729 [INFO][4884] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.742 [INFO][4884] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.4/26] block=192.168.3.0/26 handle="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.742 [INFO][4884] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.4/26] handle="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.742 [INFO][4884] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:08.811445 containerd[1712]: 2024-11-12 20:57:08.742 [INFO][4884] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.4/26] IPv6=[] ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" HandleID="k8s-pod-network.428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.757 [INFO][4848] cni-plugin/k8s.go 386: Populated endpoint ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f56efb82-9a7d-420d-9381-5bbb29af7152", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"csi-node-driver-fq22j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali806f2408aee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.758 [INFO][4848] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.4/32] ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.758 [INFO][4848] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali806f2408aee ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.768 [INFO][4848] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.770 [INFO][4848] cni-plugin/k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f56efb82-9a7d-420d-9381-5bbb29af7152", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e", Pod:"csi-node-driver-fq22j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali806f2408aee", MAC:"6e:0c:08:1c:85:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:08.815275 containerd[1712]: 2024-11-12 20:57:08.797 [INFO][4848] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore 
ContainerID="428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e" Namespace="calico-system" Pod="csi-node-driver-fq22j" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:08.829097 containerd[1712]: time="2024-11-12T20:57:08.829061003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-ql4bt,Uid:1a6a914b-2623-44ba-a104-8006201e1852,Namespace:kube-system,Attempt:1,} returns sandbox id \"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3\"" Nov 12 20:57:08.836023 containerd[1712]: time="2024-11-12T20:57:08.834584173Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:08.836023 containerd[1712]: time="2024-11-12T20:57:08.834641374Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:08.836023 containerd[1712]: time="2024-11-12T20:57:08.834661574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.836023 containerd[1712]: time="2024-11-12T20:57:08.834765976Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.837759 containerd[1712]: time="2024-11-12T20:57:08.837579012Z" level=info msg="CreateContainer within sandbox \"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:57:08.864373 systemd[1]: Started cri-containerd-a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb.scope - libcontainer container a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb. Nov 12 20:57:08.870962 containerd[1712]: time="2024-11-12T20:57:08.868363205Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:08.870962 containerd[1712]: time="2024-11-12T20:57:08.868413706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:08.870962 containerd[1712]: time="2024-11-12T20:57:08.868428206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.870962 containerd[1712]: time="2024-11-12T20:57:08.868504007Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:08.900689 containerd[1712]: time="2024-11-12T20:57:08.900626418Z" level=info msg="CreateContainer within sandbox \"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56cc4d93c92c873098e8f4ad030889aa3bf6ed75b089e4590e794ec9dea9c177\"" Nov 12 20:57:08.901367 systemd[1]: Started cri-containerd-428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e.scope - libcontainer container 428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e. 
Nov 12 20:57:08.902009 containerd[1712]: time="2024-11-12T20:57:08.901975435Z" level=info msg="StartContainer for \"56cc4d93c92c873098e8f4ad030889aa3bf6ed75b089e4590e794ec9dea9c177\"" Nov 12 20:57:08.933222 containerd[1712]: time="2024-11-12T20:57:08.932497625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-v2g28,Uid:3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb\"" Nov 12 20:57:08.940254 containerd[1712]: time="2024-11-12T20:57:08.940223424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:57:08.953452 systemd[1]: Started cri-containerd-56cc4d93c92c873098e8f4ad030889aa3bf6ed75b089e4590e794ec9dea9c177.scope - libcontainer container 56cc4d93c92c873098e8f4ad030889aa3bf6ed75b089e4590e794ec9dea9c177. Nov 12 20:57:08.967266 containerd[1712]: time="2024-11-12T20:57:08.967074467Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-fq22j,Uid:f56efb82-9a7d-420d-9381-5bbb29af7152,Namespace:calico-system,Attempt:1,} returns sandbox id \"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e\"" Nov 12 20:57:08.992661 containerd[1712]: time="2024-11-12T20:57:08.992548993Z" level=info msg="StartContainer for \"56cc4d93c92c873098e8f4ad030889aa3bf6ed75b089e4590e794ec9dea9c177\" returns successfully" Nov 12 20:57:09.154652 containerd[1712]: time="2024-11-12T20:57:09.153931357Z" level=info msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.204 [INFO][5112] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.204 [INFO][5112] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" iface="eth0" netns="/var/run/netns/cni-3229b948-599d-59de-b7c4-65a2de84c0af" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.204 [INFO][5112] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" iface="eth0" netns="/var/run/netns/cni-3229b948-599d-59de-b7c4-65a2de84c0af" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.206 [INFO][5112] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" iface="eth0" netns="/var/run/netns/cni-3229b948-599d-59de-b7c4-65a2de84c0af" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.206 [INFO][5112] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.206 [INFO][5112] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.224 [INFO][5118] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.224 [INFO][5118] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.225 [INFO][5118] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.230 [WARNING][5118] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.230 [INFO][5118] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.231 [INFO][5118] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:09.233795 containerd[1712]: 2024-11-12 20:57:09.232 [INFO][5112] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:09.234770 containerd[1712]: time="2024-11-12T20:57:09.233913379Z" level=info msg="TearDown network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" successfully" Nov 12 20:57:09.234770 containerd[1712]: time="2024-11-12T20:57:09.233949080Z" level=info msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" returns successfully" Nov 12 20:57:09.234770 containerd[1712]: time="2024-11-12T20:57:09.234735890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-j2pht,Uid:c5c951fd-1efc-4251-a9ef-b6d54bc7597b,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:57:09.250764 systemd[1]: run-netns-cni\x2d3229b948\x2d599d\x2d59de\x2db7c4\x2d65a2de84c0af.mount: Deactivated successfully. 
Nov 12 20:57:09.398986 kubelet[3255]: I1112 20:57:09.397093 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-ql4bt" podStartSLOduration=34.397001164 podStartE2EDuration="34.397001164s" podCreationTimestamp="2024-11-12 20:56:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:57:09.394103327 +0000 UTC m=+44.468804526" watchObservedRunningTime="2024-11-12 20:57:09.397001164 +0000 UTC m=+44.471702263" Nov 12 20:57:09.400409 systemd-networkd[1569]: calie0d3b55592f: Link UP Nov 12 20:57:09.400935 systemd-networkd[1569]: calie0d3b55592f: Gained carrier Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.325 [INFO][5124] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0 calico-apiserver-d9f56cd6- calico-apiserver c5c951fd-1efc-4251-a9ef-b6d54bc7597b 794 0 2024-11-12 20:56:42 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:d9f56cd6 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 calico-apiserver-d9f56cd6-j2pht eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calie0d3b55592f [] []}} ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.325 [INFO][5124] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" 
Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.348 [INFO][5135] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" HandleID="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.358 [INFO][5135] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" HandleID="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001fe170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"calico-apiserver-d9f56cd6-j2pht", "timestamp":"2024-11-12 20:57:09.348832849 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.358 [INFO][5135] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.358 [INFO][5135] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.358 [INFO][5135] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.360 [INFO][5135] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.364 [INFO][5135] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.368 [INFO][5135] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.369 [INFO][5135] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.371 [INFO][5135] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.371 [INFO][5135] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.372 [INFO][5135] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.380 [INFO][5135] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.389 [INFO][5135] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.5/26] block=192.168.3.0/26 handle="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.389 [INFO][5135] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.5/26] handle="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.389 [INFO][5135] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:09.439530 containerd[1712]: 2024-11-12 20:57:09.389 [INFO][5135] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.5/26] IPv6=[] ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" HandleID="k8s-pod-network.8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.443664 containerd[1712]: 2024-11-12 20:57:09.391 [INFO][5124] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c951fd-1efc-4251-a9ef-b6d54bc7597b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", 
"pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"calico-apiserver-d9f56cd6-j2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0d3b55592f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:09.443664 containerd[1712]: 2024-11-12 20:57:09.391 [INFO][5124] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.5/32] ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.443664 containerd[1712]: 2024-11-12 20:57:09.391 [INFO][5124] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie0d3b55592f ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.443664 containerd[1712]: 2024-11-12 20:57:09.400 [INFO][5124] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 
20:57:09.443664 containerd[1712]: 2024-11-12 20:57:09.403 [INFO][5124] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c951fd-1efc-4251-a9ef-b6d54bc7597b", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e", Pod:"calico-apiserver-d9f56cd6-j2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0d3b55592f", MAC:"b2:ba:26:73:e9:93", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:09.443664 containerd[1712]: 
2024-11-12 20:57:09.437 [INFO][5124] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e" Namespace="calico-apiserver" Pod="calico-apiserver-d9f56cd6-j2pht" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:09.472489 containerd[1712]: time="2024-11-12T20:57:09.472415029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:09.473089 containerd[1712]: time="2024-11-12T20:57:09.473047937Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:09.473235 containerd[1712]: time="2024-11-12T20:57:09.473215339Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:09.473499 containerd[1712]: time="2024-11-12T20:57:09.473446642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:09.500369 systemd[1]: Started cri-containerd-8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e.scope - libcontainer container 8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e. 
Nov 12 20:57:09.540913 containerd[1712]: time="2024-11-12T20:57:09.540502099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-d9f56cd6-j2pht,Uid:c5c951fd-1efc-4251-a9ef-b6d54bc7597b,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e\"" Nov 12 20:57:10.163570 systemd-networkd[1569]: cali74bc018f327: Gained IPv6LL Nov 12 20:57:10.548663 systemd-networkd[1569]: cali806f2408aee: Gained IPv6LL Nov 12 20:57:10.611587 systemd-networkd[1569]: calib3f2a70a269: Gained IPv6LL Nov 12 20:57:11.251449 systemd-networkd[1569]: calie0d3b55592f: Gained IPv6LL Nov 12 20:57:11.338232 containerd[1712]: time="2024-11-12T20:57:11.338164183Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:11.340464 containerd[1712]: time="2024-11-12T20:57:11.340417912Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:57:11.345854 containerd[1712]: time="2024-11-12T20:57:11.345798281Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:11.350159 containerd[1712]: time="2024-11-12T20:57:11.350056635Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:11.350965 containerd[1712]: time="2024-11-12T20:57:11.350786245Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest 
\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 2.410390019s" Nov 12 20:57:11.350965 containerd[1712]: time="2024-11-12T20:57:11.350825545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:57:11.352379 containerd[1712]: time="2024-11-12T20:57:11.351832358Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:57:11.353898 containerd[1712]: time="2024-11-12T20:57:11.353867584Z" level=info msg="CreateContainer within sandbox \"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:57:11.404734 containerd[1712]: time="2024-11-12T20:57:11.404697234Z" level=info msg="CreateContainer within sandbox \"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"322a6c3a3b5f289c0ce9c3c2754de6bf686d1c1fd1a9b5c5b8d538329b914b93\"" Nov 12 20:57:11.405237 containerd[1712]: time="2024-11-12T20:57:11.405145440Z" level=info msg="StartContainer for \"322a6c3a3b5f289c0ce9c3c2754de6bf686d1c1fd1a9b5c5b8d538329b914b93\"" Nov 12 20:57:11.438336 systemd[1]: Started cri-containerd-322a6c3a3b5f289c0ce9c3c2754de6bf686d1c1fd1a9b5c5b8d538329b914b93.scope - libcontainer container 322a6c3a3b5f289c0ce9c3c2754de6bf686d1c1fd1a9b5c5b8d538329b914b93. 
Nov 12 20:57:11.484070 containerd[1712]: time="2024-11-12T20:57:11.484015648Z" level=info msg="StartContainer for \"322a6c3a3b5f289c0ce9c3c2754de6bf686d1c1fd1a9b5c5b8d538329b914b93\" returns successfully" Nov 12 20:57:11.665351 update_engine[1692]: I20241112 20:57:11.665182 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:57:11.665778 update_engine[1692]: I20241112 20:57:11.665498 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:57:11.665827 update_engine[1692]: I20241112 20:57:11.665769 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:57:11.693983 update_engine[1692]: E20241112 20:57:11.693864 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:57:11.693983 update_engine[1692]: I20241112 20:57:11.693944 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Nov 12 20:57:12.152252 containerd[1712]: time="2024-11-12T20:57:12.152175091Z" level=info msg="StopPodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.202 [INFO][5263] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.202 [INFO][5263] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" iface="eth0" netns="/var/run/netns/cni-b936a582-114f-fcb8-3db3-918f681b9b63" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.203 [INFO][5263] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" iface="eth0" netns="/var/run/netns/cni-b936a582-114f-fcb8-3db3-918f681b9b63" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.204 [INFO][5263] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" iface="eth0" netns="/var/run/netns/cni-b936a582-114f-fcb8-3db3-918f681b9b63" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.204 [INFO][5263] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.204 [INFO][5263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.233 [INFO][5269] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.233 [INFO][5269] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.233 [INFO][5269] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.240 [WARNING][5269] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.240 [INFO][5269] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.242 [INFO][5269] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.247054 containerd[1712]: 2024-11-12 20:57:12.245 [INFO][5263] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:12.250495 containerd[1712]: time="2024-11-12T20:57:12.248570524Z" level=info msg="TearDown network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" successfully" Nov 12 20:57:12.250495 containerd[1712]: time="2024-11-12T20:57:12.248626924Z" level=info msg="StopPodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" returns successfully" Nov 12 20:57:12.251150 containerd[1712]: time="2024-11-12T20:57:12.250869153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b77f44dcc-47lhq,Uid:5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2,Namespace:calico-system,Attempt:1,}" Nov 12 20:57:12.253653 systemd[1]: run-netns-cni\x2db936a582\x2d114f\x2dfcb8\x2d3db3\x2d918f681b9b63.mount: Deactivated successfully. 
Nov 12 20:57:12.410478 systemd-networkd[1569]: cali20908ffdcb8: Link UP Nov 12 20:57:12.410733 systemd-networkd[1569]: cali20908ffdcb8: Gained carrier Nov 12 20:57:12.438042 kubelet[3255]: I1112 20:57:12.438008 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d9f56cd6-v2g28" podStartSLOduration=28.02078664 podStartE2EDuration="30.437952345s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:57:08.934317849 +0000 UTC m=+44.009019048" lastFinishedPulling="2024-11-12 20:57:11.351483654 +0000 UTC m=+46.426184753" observedRunningTime="2024-11-12 20:57:12.423557161 +0000 UTC m=+47.498258360" watchObservedRunningTime="2024-11-12 20:57:12.437952345 +0000 UTC m=+47.512653444" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.328 [INFO][5277] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0 calico-kube-controllers-7b77f44dcc- calico-system 5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2 817 0 2024-11-12 20:56:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7b77f44dcc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-a-d8aa37ea01 calico-kube-controllers-7b77f44dcc-47lhq eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali20908ffdcb8 [] []}} ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.328 [INFO][5277] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.353 [INFO][5288] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" HandleID="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.362 [INFO][5288] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" HandleID="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002907f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-d8aa37ea01", "pod":"calico-kube-controllers-7b77f44dcc-47lhq", "timestamp":"2024-11-12 20:57:12.353958971 +0000 UTC"}, Hostname:"ci-4081.2.0-a-d8aa37ea01", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.362 [INFO][5288] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.362 [INFO][5288] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.362 [INFO][5288] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-d8aa37ea01' Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.364 [INFO][5288] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.367 [INFO][5288] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.370 [INFO][5288] ipam/ipam.go 489: Trying affinity for 192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.372 [INFO][5288] ipam/ipam.go 155: Attempting to load block cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.374 [INFO][5288] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.3.0/26 host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.374 [INFO][5288] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.3.0/26 handle="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.375 [INFO][5288] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0 Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.380 [INFO][5288] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.3.0/26 handle="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.397 [INFO][5288] ipam/ipam.go 1216: Successfully claimed 
IPs: [192.168.3.6/26] block=192.168.3.0/26 handle="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.397 [INFO][5288] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.3.6/26] handle="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" host="ci-4081.2.0-a-d8aa37ea01" Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.397 [INFO][5288] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:12.441361 containerd[1712]: 2024-11-12 20:57:12.397 [INFO][5288] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.3.6/26] IPv6=[] ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" HandleID="k8s-pod-network.259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.401 [INFO][5277] cni-plugin/k8s.go 386: Populated endpoint ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0", GenerateName:"calico-kube-controllers-7b77f44dcc-", Namespace:"calico-system", SelfLink:"", UID:"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", 
"k8s-app":"calico-kube-controllers", "pod-template-hash":"7b77f44dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"", Pod:"calico-kube-controllers-7b77f44dcc-47lhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20908ffdcb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.402 [INFO][5277] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.3.6/32] ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.402 [INFO][5277] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali20908ffdcb8 ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.409 [INFO][5277] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" 
WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.409 [INFO][5277] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0", GenerateName:"calico-kube-controllers-7b77f44dcc-", Namespace:"calico-system", SelfLink:"", UID:"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b77f44dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0", Pod:"calico-kube-controllers-7b77f44dcc-47lhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, 
InterfaceName:"cali20908ffdcb8", MAC:"c6:71:00:b8:4d:00", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:12.442509 containerd[1712]: 2024-11-12 20:57:12.436 [INFO][5277] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0" Namespace="calico-system" Pod="calico-kube-controllers-7b77f44dcc-47lhq" WorkloadEndpoint="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:12.487485 containerd[1712]: time="2024-11-12T20:57:12.487318376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:57:12.488390 containerd[1712]: time="2024-11-12T20:57:12.488163987Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:57:12.488390 containerd[1712]: time="2024-11-12T20:57:12.488265988Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:12.488703 containerd[1712]: time="2024-11-12T20:57:12.488372490Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:57:12.542356 systemd[1]: Started cri-containerd-259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0.scope - libcontainer container 259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0. 
Nov 12 20:57:12.587228 containerd[1712]: time="2024-11-12T20:57:12.587176453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7b77f44dcc-47lhq,Uid:5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2,Namespace:calico-system,Attempt:1,} returns sandbox id \"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0\"" Nov 12 20:57:12.903148 containerd[1712]: time="2024-11-12T20:57:12.903083292Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:12.907870 containerd[1712]: time="2024-11-12T20:57:12.907708151Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:57:12.912483 containerd[1712]: time="2024-11-12T20:57:12.912445312Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:12.920051 containerd[1712]: time="2024-11-12T20:57:12.919847706Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:12.920051 containerd[1712]: time="2024-11-12T20:57:12.919905307Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.567936848s" Nov 12 20:57:12.920051 containerd[1712]: time="2024-11-12T20:57:12.919935907Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:57:12.922937 
containerd[1712]: time="2024-11-12T20:57:12.922667742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:57:12.924555 containerd[1712]: time="2024-11-12T20:57:12.924433665Z" level=info msg="CreateContainer within sandbox \"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:57:12.968564 containerd[1712]: time="2024-11-12T20:57:12.968523629Z" level=info msg="CreateContainer within sandbox \"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"4468eee49f9769af47b54494ee2d13d6f3cc1d20f225d8e530c5844fb207e57d\"" Nov 12 20:57:12.970247 containerd[1712]: time="2024-11-12T20:57:12.969467741Z" level=info msg="StartContainer for \"4468eee49f9769af47b54494ee2d13d6f3cc1d20f225d8e530c5844fb207e57d\"" Nov 12 20:57:13.005899 systemd[1]: Started cri-containerd-4468eee49f9769af47b54494ee2d13d6f3cc1d20f225d8e530c5844fb207e57d.scope - libcontainer container 4468eee49f9769af47b54494ee2d13d6f3cc1d20f225d8e530c5844fb207e57d. 
Nov 12 20:57:13.060587 containerd[1712]: time="2024-11-12T20:57:13.060544705Z" level=info msg="StartContainer for \"4468eee49f9769af47b54494ee2d13d6f3cc1d20f225d8e530c5844fb207e57d\" returns successfully" Nov 12 20:57:13.241995 containerd[1712]: time="2024-11-12T20:57:13.241950025Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:13.243874 containerd[1712]: time="2024-11-12T20:57:13.243824348Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:57:13.245816 containerd[1712]: time="2024-11-12T20:57:13.245785574Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 323.07983ms" Nov 12 20:57:13.245895 containerd[1712]: time="2024-11-12T20:57:13.245816574Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:57:13.247028 containerd[1712]: time="2024-11-12T20:57:13.246437382Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:57:13.248280 containerd[1712]: time="2024-11-12T20:57:13.248082203Z" level=info msg="CreateContainer within sandbox \"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:57:13.294813 containerd[1712]: time="2024-11-12T20:57:13.294766800Z" level=info msg="CreateContainer within sandbox \"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"96fa80d94eca67d66b06732a1885b65893e3e17c2c588395966094aa73039440\"" Nov 12 20:57:13.295457 containerd[1712]: time="2024-11-12T20:57:13.295379608Z" level=info msg="StartContainer for \"96fa80d94eca67d66b06732a1885b65893e3e17c2c588395966094aa73039440\"" Nov 12 20:57:13.321369 systemd[1]: Started cri-containerd-96fa80d94eca67d66b06732a1885b65893e3e17c2c588395966094aa73039440.scope - libcontainer container 96fa80d94eca67d66b06732a1885b65893e3e17c2c588395966094aa73039440. Nov 12 20:57:13.383918 containerd[1712]: time="2024-11-12T20:57:13.383625136Z" level=info msg="StartContainer for \"96fa80d94eca67d66b06732a1885b65893e3e17c2c588395966094aa73039440\" returns successfully" Nov 12 20:57:13.412998 kubelet[3255]: I1112 20:57:13.412941 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:14.195342 systemd-networkd[1569]: cali20908ffdcb8: Gained IPv6LL Nov 12 20:57:14.415945 kubelet[3255]: I1112 20:57:14.415574 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:15.458930 containerd[1712]: time="2024-11-12T20:57:15.458881091Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:15.461481 containerd[1712]: time="2024-11-12T20:57:15.461335721Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:57:15.465048 containerd[1712]: time="2024-11-12T20:57:15.465013267Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:15.470974 containerd[1712]: time="2024-11-12T20:57:15.470906940Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:15.472091 containerd[1712]: time="2024-11-12T20:57:15.471683050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.225205467s" Nov 12 20:57:15.472091 containerd[1712]: time="2024-11-12T20:57:15.471726350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:57:15.473492 containerd[1712]: time="2024-11-12T20:57:15.473463572Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:57:15.489249 containerd[1712]: time="2024-11-12T20:57:15.488505558Z" level=info msg="CreateContainer within sandbox \"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:57:15.538034 containerd[1712]: time="2024-11-12T20:57:15.537991273Z" level=info msg="CreateContainer within sandbox \"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f\"" Nov 12 20:57:15.538616 containerd[1712]: time="2024-11-12T20:57:15.538540980Z" level=info msg="StartContainer for \"2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f\"" Nov 12 20:57:15.570370 systemd[1]: Started cri-containerd-2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f.scope - libcontainer container 2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f. 
Nov 12 20:57:15.617064 containerd[1712]: time="2024-11-12T20:57:15.617008454Z" level=info msg="StartContainer for \"2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f\" returns successfully" Nov 12 20:57:16.437014 kubelet[3255]: I1112 20:57:16.436977 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-d9f56cd6-j2pht" podStartSLOduration=30.732882379 podStartE2EDuration="34.436929538s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:57:09.54216602 +0000 UTC m=+44.616867119" lastFinishedPulling="2024-11-12 20:57:13.246213079 +0000 UTC m=+48.320914278" observedRunningTime="2024-11-12 20:57:13.4277051 +0000 UTC m=+48.502406299" watchObservedRunningTime="2024-11-12 20:57:16.436929538 +0000 UTC m=+51.511630637" Nov 12 20:57:16.437979 kubelet[3255]: I1112 20:57:16.437348 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7b77f44dcc-47lhq" podStartSLOduration=31.554004261 podStartE2EDuration="34.437309642s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:57:12.588740373 +0000 UTC m=+47.663441472" lastFinishedPulling="2024-11-12 20:57:15.472045654 +0000 UTC m=+50.546746853" observedRunningTime="2024-11-12 20:57:16.436578533 +0000 UTC m=+51.511279632" watchObservedRunningTime="2024-11-12 20:57:16.437309642 +0000 UTC m=+51.512010741" Nov 12 20:57:16.883304 containerd[1712]: time="2024-11-12T20:57:16.883258081Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:16.885356 containerd[1712]: time="2024-11-12T20:57:16.885302307Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080" Nov 12 20:57:16.890114 containerd[1712]: time="2024-11-12T20:57:16.890072066Z" level=info 
msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:16.895303 containerd[1712]: time="2024-11-12T20:57:16.895250430Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:57:16.896364 containerd[1712]: time="2024-11-12T20:57:16.895934339Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 1.422197863s" Nov 12 20:57:16.896364 containerd[1712]: time="2024-11-12T20:57:16.895974239Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\"" Nov 12 20:57:16.898552 containerd[1712]: time="2024-11-12T20:57:16.898519571Z" level=info msg="CreateContainer within sandbox \"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Nov 12 20:57:16.934698 containerd[1712]: time="2024-11-12T20:57:16.934656319Z" level=info msg="CreateContainer within sandbox \"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6a729a2a24999180d28f9829bc43346683f6cc4e1dc8f3ba7b8f55d8257ab01e\"" Nov 12 20:57:16.935388 containerd[1712]: time="2024-11-12T20:57:16.935280127Z" level=info msg="StartContainer for 
\"6a729a2a24999180d28f9829bc43346683f6cc4e1dc8f3ba7b8f55d8257ab01e\"" Nov 12 20:57:16.971350 systemd[1]: Started cri-containerd-6a729a2a24999180d28f9829bc43346683f6cc4e1dc8f3ba7b8f55d8257ab01e.scope - libcontainer container 6a729a2a24999180d28f9829bc43346683f6cc4e1dc8f3ba7b8f55d8257ab01e. Nov 12 20:57:17.001216 containerd[1712]: time="2024-11-12T20:57:17.000636839Z" level=info msg="StartContainer for \"6a729a2a24999180d28f9829bc43346683f6cc4e1dc8f3ba7b8f55d8257ab01e\" returns successfully" Nov 12 20:57:17.251231 kubelet[3255]: I1112 20:57:17.251176 3255 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Nov 12 20:57:17.251231 kubelet[3255]: I1112 20:57:17.251235 3255 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Nov 12 20:57:17.428943 kubelet[3255]: I1112 20:57:17.428514 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:17.442632 kubelet[3255]: I1112 20:57:17.442595 3255 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-fq22j" podStartSLOduration=27.514453266 podStartE2EDuration="35.442545027s" podCreationTimestamp="2024-11-12 20:56:42 +0000 UTC" firstStartedPulling="2024-11-12 20:57:08.968482185 +0000 UTC m=+44.043183284" lastFinishedPulling="2024-11-12 20:57:16.896573946 +0000 UTC m=+51.971275045" observedRunningTime="2024-11-12 20:57:17.441237611 +0000 UTC m=+52.515938710" watchObservedRunningTime="2024-11-12 20:57:17.442545027 +0000 UTC m=+52.517246126" Nov 12 20:57:18.690530 kubelet[3255]: I1112 20:57:18.690264 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:18.804832 kubelet[3255]: I1112 20:57:18.804063 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:21.666320 
update_engine[1692]: I20241112 20:57:21.666251 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:57:21.666777 update_engine[1692]: I20241112 20:57:21.666548 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:57:21.666898 update_engine[1692]: I20241112 20:57:21.666795 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:57:21.792284 update_engine[1692]: E20241112 20:57:21.792174 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:57:21.792447 update_engine[1692]: I20241112 20:57:21.792316 1692 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Nov 12 20:57:25.162268 containerd[1712]: time="2024-11-12T20:57:25.162212845Z" level=info msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.193 [WARNING][5583] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1a6a914b-2623-44ba-a104-8006201e1852", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3", Pod:"coredns-76f75df574-ql4bt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74bc018f327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.193 [INFO][5583] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.193 [INFO][5583] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" iface="eth0" netns="" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.193 [INFO][5583] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.193 [INFO][5583] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.211 [INFO][5589] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.211 [INFO][5589] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.211 [INFO][5589] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.217 [WARNING][5589] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.217 [INFO][5589] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.218 [INFO][5589] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.221251 containerd[1712]: 2024-11-12 20:57:25.219 [INFO][5583] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.221251 containerd[1712]: time="2024-11-12T20:57:25.220880356Z" level=info msg="TearDown network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" successfully" Nov 12 20:57:25.221251 containerd[1712]: time="2024-11-12T20:57:25.220912956Z" level=info msg="StopPodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" returns successfully" Nov 12 20:57:25.221970 containerd[1712]: time="2024-11-12T20:57:25.221491063Z" level=info msg="RemovePodSandbox for \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" Nov 12 20:57:25.221970 containerd[1712]: time="2024-11-12T20:57:25.221527263Z" level=info msg="Forcibly stopping sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\"" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.256 [WARNING][5607] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"1a6a914b-2623-44ba-a104-8006201e1852", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"c033a59d36653ddd0f0aae141b260dcd76a67397e37793d0718892f3889aa7d3", Pod:"coredns-76f75df574-ql4bt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali74bc018f327", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.256 [INFO][5607] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.256 [INFO][5607] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" iface="eth0" netns="" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.256 [INFO][5607] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.256 [INFO][5607] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.274 [INFO][5613] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.274 [INFO][5613] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.274 [INFO][5613] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.279 [WARNING][5613] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.279 [INFO][5613] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" HandleID="k8s-pod-network.e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--ql4bt-eth0" Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.281 [INFO][5613] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.283277 containerd[1712]: 2024-11-12 20:57:25.282 [INFO][5607] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47" Nov 12 20:57:25.283277 containerd[1712]: time="2024-11-12T20:57:25.283236611Z" level=info msg="TearDown network for sandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" successfully" Nov 12 20:57:25.291797 containerd[1712]: time="2024-11-12T20:57:25.291682214Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.291797 containerd[1712]: time="2024-11-12T20:57:25.291756614Z" level=info msg="RemovePodSandbox \"e1bc5f54ad62db89d2171605b55b2e71a2867040b815d99f3b1fe684e24f4a47\" returns successfully" Nov 12 20:57:25.292388 containerd[1712]: time="2024-11-12T20:57:25.292359222Z" level=info msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.322 [WARNING][5631] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31b5c95e-dc8f-4bde-b086-78c0f44c5289", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb", Pod:"coredns-76f75df574-zbhth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa8d09fbe38", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.323 [INFO][5631] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.323 [INFO][5631] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" iface="eth0" netns="" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.323 [INFO][5631] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.323 [INFO][5631] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.344 [INFO][5637] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.344 [INFO][5637] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.344 [INFO][5637] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.352 [WARNING][5637] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.352 [INFO][5637] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.355 [INFO][5637] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.360746 containerd[1712]: 2024-11-12 20:57:25.358 [INFO][5631] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.361577 containerd[1712]: time="2024-11-12T20:57:25.360855952Z" level=info msg="TearDown network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" successfully" Nov 12 20:57:25.361577 containerd[1712]: time="2024-11-12T20:57:25.360916652Z" level=info msg="StopPodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" returns successfully" Nov 12 20:57:25.361577 containerd[1712]: time="2024-11-12T20:57:25.361430859Z" level=info msg="RemovePodSandbox for \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" Nov 12 20:57:25.361577 containerd[1712]: time="2024-11-12T20:57:25.361463159Z" level=info msg="Forcibly stopping sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\"" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.401 [WARNING][5657] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0", GenerateName:"coredns-76f75df574-", Namespace:"kube-system", SelfLink:"", UID:"31b5c95e-dc8f-4bde-b086-78c0f44c5289", ResourceVersion:"772", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"76f75df574", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"b7c543e2975f71a5c8a4cd16b3d1150ee7ceba3949eaf51ddfe360fce34fcebb", Pod:"coredns-76f75df574-zbhth", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.3.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaa8d09fbe38", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.402 [INFO][5657] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.402 [INFO][5657] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" iface="eth0" netns="" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.402 [INFO][5657] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.402 [INFO][5657] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.419 [INFO][5664] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.419 [INFO][5664] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.419 [INFO][5664] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.425 [WARNING][5664] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.425 [INFO][5664] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" HandleID="k8s-pod-network.12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-coredns--76f75df574--zbhth-eth0" Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.427 [INFO][5664] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.429363 containerd[1712]: 2024-11-12 20:57:25.428 [INFO][5657] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4" Nov 12 20:57:25.430150 containerd[1712]: time="2024-11-12T20:57:25.429331582Z" level=info msg="TearDown network for sandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" successfully" Nov 12 20:57:25.438502 containerd[1712]: time="2024-11-12T20:57:25.438460892Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.438614 containerd[1712]: time="2024-11-12T20:57:25.438524493Z" level=info msg="RemovePodSandbox \"12e75bc500c68d80f4c5de2e98dfd9db6e137d77d0ddd9b0c0ab7ee0aec8f4d4\" returns successfully" Nov 12 20:57:25.439107 containerd[1712]: time="2024-11-12T20:57:25.439077700Z" level=info msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.472 [WARNING][5683] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb", Pod:"calico-apiserver-d9f56cd6-v2g28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3f2a70a269", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.472 [INFO][5683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.472 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" iface="eth0" netns="" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.472 [INFO][5683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.472 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.490 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.490 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.490 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.496 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.496 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.497 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.499508 containerd[1712]: 2024-11-12 20:57:25.498 [INFO][5683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.500298 containerd[1712]: time="2024-11-12T20:57:25.499552732Z" level=info msg="TearDown network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" successfully" Nov 12 20:57:25.500298 containerd[1712]: time="2024-11-12T20:57:25.499585933Z" level=info msg="StopPodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" returns successfully" Nov 12 20:57:25.500298 containerd[1712]: time="2024-11-12T20:57:25.500242041Z" level=info msg="RemovePodSandbox for \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" Nov 12 20:57:25.500298 containerd[1712]: time="2024-11-12T20:57:25.500276441Z" level=info msg="Forcibly stopping sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\"" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.530 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"3d8a8d61-0a3b-4bd2-90b5-7da34e2b6482", ResourceVersion:"859", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"a24951a1f7c85d03ea6801ab1a3e192301f3efb0901ba41b0549da009409f8eb", Pod:"calico-apiserver-d9f56cd6-v2g28", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib3f2a70a269", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.530 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.530 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" iface="eth0" netns="" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.530 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.530 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.548 [INFO][5713] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.549 [INFO][5713] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.549 [INFO][5713] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.554 [WARNING][5713] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.554 [INFO][5713] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" HandleID="k8s-pod-network.56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--v2g28-eth0" Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.555 [INFO][5713] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.557323 containerd[1712]: 2024-11-12 20:57:25.556 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725" Nov 12 20:57:25.558057 containerd[1712]: time="2024-11-12T20:57:25.557341933Z" level=info msg="TearDown network for sandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" successfully" Nov 12 20:57:25.564767 containerd[1712]: time="2024-11-12T20:57:25.564722822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.564888 containerd[1712]: time="2024-11-12T20:57:25.564790023Z" level=info msg="RemovePodSandbox \"56ac413af75339c45a9bd9473d94e66ba2b0aa813e4edcbf922d837d43060725\" returns successfully" Nov 12 20:57:25.565335 containerd[1712]: time="2024-11-12T20:57:25.565303029Z" level=info msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.597 [WARNING][5731] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c951fd-1efc-4251-a9ef-b6d54bc7597b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e", Pod:"calico-apiserver-d9f56cd6-j2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0d3b55592f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.597 [INFO][5731] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.597 [INFO][5731] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" iface="eth0" netns="" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.597 [INFO][5731] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.597 [INFO][5731] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.620 [INFO][5737] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.620 [INFO][5737] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.620 [INFO][5737] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.626 [WARNING][5737] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.626 [INFO][5737] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.627 [INFO][5737] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.629849 containerd[1712]: 2024-11-12 20:57:25.628 [INFO][5731] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.630700 containerd[1712]: time="2024-11-12T20:57:25.629878712Z" level=info msg="TearDown network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" successfully" Nov 12 20:57:25.630700 containerd[1712]: time="2024-11-12T20:57:25.629909312Z" level=info msg="StopPodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" returns successfully" Nov 12 20:57:25.630700 containerd[1712]: time="2024-11-12T20:57:25.630496919Z" level=info msg="RemovePodSandbox for \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" Nov 12 20:57:25.630700 containerd[1712]: time="2024-11-12T20:57:25.630533920Z" level=info msg="Forcibly stopping sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\"" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.678 [WARNING][5755] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0", GenerateName:"calico-apiserver-d9f56cd6-", Namespace:"calico-apiserver", SelfLink:"", UID:"c5c951fd-1efc-4251-a9ef-b6d54bc7597b", ResourceVersion:"832", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"d9f56cd6", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"8f0b88454f286221cc3bd2c45c07a2aeaeb7f7beb67ec3c50e7f2b7461f09a6e", Pod:"calico-apiserver-d9f56cd6-j2pht", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.3.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calie0d3b55592f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.678 [INFO][5755] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.678 [INFO][5755] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" iface="eth0" netns="" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.678 [INFO][5755] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.678 [INFO][5755] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.701 [INFO][5761] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.701 [INFO][5761] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.701 [INFO][5761] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.707 [WARNING][5761] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.707 [INFO][5761] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" HandleID="k8s-pod-network.0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--apiserver--d9f56cd6--j2pht-eth0" Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.709 [INFO][5761] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.711684 containerd[1712]: 2024-11-12 20:57:25.710 [INFO][5755] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284" Nov 12 20:57:25.711684 containerd[1712]: time="2024-11-12T20:57:25.711635502Z" level=info msg="TearDown network for sandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" successfully" Nov 12 20:57:25.720432 containerd[1712]: time="2024-11-12T20:57:25.720250507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.720679 containerd[1712]: time="2024-11-12T20:57:25.720324808Z" level=info msg="RemovePodSandbox \"0c5122889b2f40d3a85927a37241a1fc649c967e9e9c8a212e4ad3bc86f44284\" returns successfully" Nov 12 20:57:25.721795 containerd[1712]: time="2024-11-12T20:57:25.721489722Z" level=info msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.753 [WARNING][5780] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f56efb82-9a7d-420d-9381-5bbb29af7152", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e", Pod:"csi-node-driver-fq22j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali806f2408aee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.753 [INFO][5780] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.753 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" iface="eth0" netns="" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.753 [INFO][5780] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.753 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.772 [INFO][5786] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.773 [INFO][5786] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.773 [INFO][5786] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.778 [WARNING][5786] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.778 [INFO][5786] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.779 [INFO][5786] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.782222 containerd[1712]: 2024-11-12 20:57:25.781 [INFO][5780] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.782873 containerd[1712]: time="2024-11-12T20:57:25.782252858Z" level=info msg="TearDown network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" successfully" Nov 12 20:57:25.782873 containerd[1712]: time="2024-11-12T20:57:25.782283058Z" level=info msg="StopPodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" returns successfully" Nov 12 20:57:25.782873 containerd[1712]: time="2024-11-12T20:57:25.782784264Z" level=info msg="RemovePodSandbox for \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" Nov 12 20:57:25.782873 containerd[1712]: time="2024-11-12T20:57:25.782817765Z" level=info msg="Forcibly stopping sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\"" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.814 [WARNING][5804] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f56efb82-9a7d-420d-9381-5bbb29af7152", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"64dd8495dc", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"428454e5343dfb525a0c3dabc68ba6cb419d5059815aa2965c428205bea6005e", Pod:"csi-node-driver-fq22j", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.3.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali806f2408aee", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.814 [INFO][5804] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.814 [INFO][5804] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" iface="eth0" netns="" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.814 [INFO][5804] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.814 [INFO][5804] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.831 [INFO][5810] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.832 [INFO][5810] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.832 [INFO][5810] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.838 [WARNING][5810] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.838 [INFO][5810] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" HandleID="k8s-pod-network.6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-csi--node--driver--fq22j-eth0" Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.839 [INFO][5810] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.841334 containerd[1712]: 2024-11-12 20:57:25.840 [INFO][5804] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a" Nov 12 20:57:25.841984 containerd[1712]: time="2024-11-12T20:57:25.841410775Z" level=info msg="TearDown network for sandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" successfully" Nov 12 20:57:25.852092 containerd[1712]: time="2024-11-12T20:57:25.852058804Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.852223 containerd[1712]: time="2024-11-12T20:57:25.852137705Z" level=info msg="RemovePodSandbox \"6ced421df5e53c9cd0fde1266af69204b852dca6644f52b6f80d5856d6ce6e6a\" returns successfully" Nov 12 20:57:25.852730 containerd[1712]: time="2024-11-12T20:57:25.852702112Z" level=info msg="StopPodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.887 [WARNING][5828] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0", GenerateName:"calico-kube-controllers-7b77f44dcc-", Namespace:"calico-system", SelfLink:"", UID:"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b77f44dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0", Pod:"calico-kube-controllers-7b77f44dcc-47lhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20908ffdcb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.887 [INFO][5828] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.887 [INFO][5828] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" iface="eth0" netns="" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.887 [INFO][5828] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.887 [INFO][5828] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.906 [INFO][5834] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.906 [INFO][5834] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.906 [INFO][5834] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.910 [WARNING][5834] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.910 [INFO][5834] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.911 [INFO][5834] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.913633 containerd[1712]: 2024-11-12 20:57:25.912 [INFO][5828] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.914627 containerd[1712]: time="2024-11-12T20:57:25.913742351Z" level=info msg="TearDown network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" successfully" Nov 12 20:57:25.914627 containerd[1712]: time="2024-11-12T20:57:25.913790852Z" level=info msg="StopPodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" returns successfully" Nov 12 20:57:25.914627 containerd[1712]: time="2024-11-12T20:57:25.914389359Z" level=info msg="RemovePodSandbox for \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" Nov 12 20:57:25.914627 containerd[1712]: time="2024-11-12T20:57:25.914423960Z" level=info msg="Forcibly stopping sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\"" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.951 [WARNING][5852] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0", GenerateName:"calico-kube-controllers-7b77f44dcc-", Namespace:"calico-system", SelfLink:"", UID:"5a2a87e1-3ea6-4f5b-a14a-35a0a9288ab2", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 56, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7b77f44dcc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-d8aa37ea01", ContainerID:"259d26bcb387e20894ca7d26d9d5672365208ed1025d18107dfee7adf37406c0", Pod:"calico-kube-controllers-7b77f44dcc-47lhq", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.3.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali20908ffdcb8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.951 [INFO][5852] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.951 [INFO][5852] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called 
with no netns name, ignoring. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" iface="eth0" netns="" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.951 [INFO][5852] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.951 [INFO][5852] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.969 [INFO][5858] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.969 [INFO][5858] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.969 [INFO][5858] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.974 [WARNING][5858] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.975 [INFO][5858] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" HandleID="k8s-pod-network.df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Workload="ci--4081.2.0--a--d8aa37ea01-k8s-calico--kube--controllers--7b77f44dcc--47lhq-eth0" Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.977 [INFO][5858] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:57:25.978978 containerd[1712]: 2024-11-12 20:57:25.977 [INFO][5852] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde" Nov 12 20:57:25.978978 containerd[1712]: time="2024-11-12T20:57:25.978890841Z" level=info msg="TearDown network for sandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" successfully" Nov 12 20:57:25.991154 containerd[1712]: time="2024-11-12T20:57:25.991097989Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:57:25.991314 containerd[1712]: time="2024-11-12T20:57:25.991178190Z" level=info msg="RemovePodSandbox \"df9bf2dd8d17018907f429dc27b30f09eda7668f122c331986c36bd751b62cde\" returns successfully" Nov 12 20:57:31.124652 kubelet[3255]: I1112 20:57:31.124399 3255 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:57:31.675316 update_engine[1692]: I20241112 20:57:31.674316 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:57:31.675316 update_engine[1692]: I20241112 20:57:31.674978 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:57:31.675316 update_engine[1692]: I20241112 20:57:31.675235 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Nov 12 20:57:31.695561 update_engine[1692]: E20241112 20:57:31.695508 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:57:31.695714 update_engine[1692]: I20241112 20:57:31.695595 1692 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 20:57:31.695714 update_engine[1692]: I20241112 20:57:31.695609 1692 omaha_request_action.cc:617] Omaha request response: Nov 12 20:57:31.695714 update_engine[1692]: E20241112 20:57:31.695705 1692 omaha_request_action.cc:636] Omaha request network transfer failed. Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695736 1692 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695747 1692 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695756 1692 update_attempter.cc:306] Processing Done. Nov 12 20:57:31.695928 update_engine[1692]: E20241112 20:57:31.695776 1692 update_attempter.cc:619] Update failed. 
Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695785 1692 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695796 1692 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Nov 12 20:57:31.695928 update_engine[1692]: I20241112 20:57:31.695806 1692 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Nov 12 20:57:31.696394 update_engine[1692]: I20241112 20:57:31.696074 1692 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Nov 12 20:57:31.696394 update_engine[1692]: I20241112 20:57:31.696128 1692 omaha_request_action.cc:271] Posting an Omaha request to disabled Nov 12 20:57:31.696394 update_engine[1692]: I20241112 20:57:31.696137 1692 omaha_request_action.cc:272] Request: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: Nov 12 20:57:31.696394 update_engine[1692]: I20241112 20:57:31.696147 1692 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Nov 12 20:57:31.696953 update_engine[1692]: I20241112 20:57:31.696403 1692 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Nov 12 20:57:31.696953 update_engine[1692]: I20241112 20:57:31.696632 1692 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Nov 12 20:57:31.697019 locksmithd[1755]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Nov 12 20:57:31.716658 update_engine[1692]: E20241112 20:57:31.716567 1692 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Nov 12 20:57:31.716658 update_engine[1692]: I20241112 20:57:31.716634 1692 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Nov 12 20:57:31.716658 update_engine[1692]: I20241112 20:57:31.716647 1692 omaha_request_action.cc:617] Omaha request response: Nov 12 20:57:31.716658 update_engine[1692]: I20241112 20:57:31.716658 1692 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:57:31.716890 update_engine[1692]: I20241112 20:57:31.716665 1692 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Nov 12 20:57:31.716890 update_engine[1692]: I20241112 20:57:31.716670 1692 update_attempter.cc:306] Processing Done. Nov 12 20:57:31.716890 update_engine[1692]: I20241112 20:57:31.716678 1692 update_attempter.cc:310] Error event sent. Nov 12 20:57:31.716890 update_engine[1692]: I20241112 20:57:31.716689 1692 update_check_scheduler.cc:74] Next update check in 44m26s Nov 12 20:57:31.717056 locksmithd[1755]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Nov 12 20:58:29.217484 systemd[1]: Started sshd@7-10.200.8.15:22-10.200.16.10:58210.service - OpenSSH per-connection server daemon (10.200.16.10:58210). 
Nov 12 20:58:29.838233 sshd[5998]: Accepted publickey for core from 10.200.16.10 port 58210 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:58:29.840298 sshd[5998]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:29.865894 systemd[1]: run-containerd-runc-k8s.io-537c5bc4df77fc0a7e95d482aed939d3ce1dbc5b7e4285f4a0ab1681cbf1cbbd-runc.EkitPn.mount: Deactivated successfully. Nov 12 20:58:29.872695 systemd-logind[1689]: New session 10 of user core. Nov 12 20:58:29.879340 systemd[1]: Started session-10.scope - Session 10 of User core. Nov 12 20:58:30.347074 sshd[5998]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:30.351822 systemd[1]: sshd@7-10.200.8.15:22-10.200.16.10:58210.service: Deactivated successfully. Nov 12 20:58:30.354478 systemd[1]: session-10.scope: Deactivated successfully. Nov 12 20:58:30.355953 systemd-logind[1689]: Session 10 logged out. Waiting for processes to exit. Nov 12 20:58:30.357120 systemd-logind[1689]: Removed session 10. Nov 12 20:58:35.462488 systemd[1]: Started sshd@8-10.200.8.15:22-10.200.16.10:58222.service - OpenSSH per-connection server daemon (10.200.16.10:58222). Nov 12 20:58:36.080615 sshd[6056]: Accepted publickey for core from 10.200.16.10 port 58222 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:58:36.082234 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:36.086959 systemd-logind[1689]: New session 11 of user core. Nov 12 20:58:36.091392 systemd[1]: Started session-11.scope - Session 11 of User core. Nov 12 20:58:36.583740 sshd[6056]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:36.588073 systemd[1]: sshd@8-10.200.8.15:22-10.200.16.10:58222.service: Deactivated successfully. Nov 12 20:58:36.590792 systemd[1]: session-11.scope: Deactivated successfully. Nov 12 20:58:36.591767 systemd-logind[1689]: Session 11 logged out. Waiting for processes to exit. 
Nov 12 20:58:36.592848 systemd-logind[1689]: Removed session 11. Nov 12 20:58:41.695371 systemd[1]: Started sshd@9-10.200.8.15:22-10.200.16.10:42244.service - OpenSSH per-connection server daemon (10.200.16.10:42244). Nov 12 20:58:42.317824 sshd[6084]: Accepted publickey for core from 10.200.16.10 port 42244 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:58:42.319431 sshd[6084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:42.324056 systemd-logind[1689]: New session 12 of user core. Nov 12 20:58:42.329342 systemd[1]: Started session-12.scope - Session 12 of User core. Nov 12 20:58:42.825781 sshd[6084]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:42.829158 systemd[1]: sshd@9-10.200.8.15:22-10.200.16.10:42244.service: Deactivated successfully. Nov 12 20:58:42.831625 systemd[1]: session-12.scope: Deactivated successfully. Nov 12 20:58:42.833583 systemd-logind[1689]: Session 12 logged out. Waiting for processes to exit. Nov 12 20:58:42.834991 systemd-logind[1689]: Removed session 12. Nov 12 20:58:47.943492 systemd[1]: Started sshd@10-10.200.8.15:22-10.200.16.10:42254.service - OpenSSH per-connection server daemon (10.200.16.10:42254). Nov 12 20:58:48.562445 sshd[6103]: Accepted publickey for core from 10.200.16.10 port 42254 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:58:48.564602 sshd[6103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:58:48.569274 systemd-logind[1689]: New session 13 of user core. Nov 12 20:58:48.573357 systemd[1]: Started session-13.scope - Session 13 of User core. Nov 12 20:58:49.064818 sshd[6103]: pam_unix(sshd:session): session closed for user core Nov 12 20:58:49.069348 systemd[1]: sshd@10-10.200.8.15:22-10.200.16.10:42254.service: Deactivated successfully. Nov 12 20:58:49.071663 systemd[1]: session-13.scope: Deactivated successfully. 
Nov 12 20:58:49.072464 systemd-logind[1689]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:58:49.073567 systemd-logind[1689]: Removed session 13.
Nov 12 20:58:49.179493 systemd[1]: Started sshd@11-10.200.8.15:22-10.200.16.10:42892.service - OpenSSH per-connection server daemon (10.200.16.10:42892).
Nov 12 20:58:49.798349 sshd[6136]: Accepted publickey for core from 10.200.16.10 port 42892 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:49.800254 sshd[6136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:49.805688 systemd-logind[1689]: New session 14 of user core.
Nov 12 20:58:49.811427 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:58:50.327900 sshd[6136]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:50.332542 systemd[1]: sshd@11-10.200.8.15:22-10.200.16.10:42892.service: Deactivated successfully.
Nov 12 20:58:50.335166 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:58:50.336358 systemd-logind[1689]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:58:50.337485 systemd-logind[1689]: Removed session 14.
Nov 12 20:58:50.445525 systemd[1]: Started sshd@12-10.200.8.15:22-10.200.16.10:42896.service - OpenSSH per-connection server daemon (10.200.16.10:42896).
Nov 12 20:58:51.064728 sshd[6146]: Accepted publickey for core from 10.200.16.10 port 42896 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:51.066559 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:51.072449 systemd-logind[1689]: New session 15 of user core.
Nov 12 20:58:51.075356 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:58:51.568676 sshd[6146]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:51.575438 systemd[1]: sshd@12-10.200.8.15:22-10.200.16.10:42896.service: Deactivated successfully.
Nov 12 20:58:51.578517 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:58:51.580309 systemd-logind[1689]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:58:51.581680 systemd-logind[1689]: Removed session 15.
Nov 12 20:58:56.678465 systemd[1]: Started sshd@13-10.200.8.15:22-10.200.16.10:42906.service - OpenSSH per-connection server daemon (10.200.16.10:42906).
Nov 12 20:58:57.303714 sshd[6163]: Accepted publickey for core from 10.200.16.10 port 42906 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:58:57.305288 sshd[6163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:58:57.310507 systemd-logind[1689]: New session 16 of user core.
Nov 12 20:58:57.316371 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:58:57.820911 sshd[6163]: pam_unix(sshd:session): session closed for user core
Nov 12 20:58:57.824513 systemd[1]: sshd@13-10.200.8.15:22-10.200.16.10:42906.service: Deactivated successfully.
Nov 12 20:58:57.827172 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:58:57.828981 systemd-logind[1689]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:58:57.830255 systemd-logind[1689]: Removed session 16.
Nov 12 20:59:02.937565 systemd[1]: Started sshd@14-10.200.8.15:22-10.200.16.10:52498.service - OpenSSH per-connection server daemon (10.200.16.10:52498).
Nov 12 20:59:03.567394 sshd[6198]: Accepted publickey for core from 10.200.16.10 port 52498 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:03.568926 sshd[6198]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:03.572960 systemd-logind[1689]: New session 17 of user core.
Nov 12 20:59:03.579565 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:59:04.068535 sshd[6198]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:04.071576 systemd[1]: sshd@14-10.200.8.15:22-10.200.16.10:52498.service: Deactivated successfully.
Nov 12 20:59:04.074027 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:59:04.075732 systemd-logind[1689]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:59:04.076749 systemd-logind[1689]: Removed session 17.
Nov 12 20:59:09.184540 systemd[1]: Started sshd@15-10.200.8.15:22-10.200.16.10:43810.service - OpenSSH per-connection server daemon (10.200.16.10:43810).
Nov 12 20:59:09.809478 sshd[6214]: Accepted publickey for core from 10.200.16.10 port 43810 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:09.811204 sshd[6214]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:09.816077 systemd-logind[1689]: New session 18 of user core.
Nov 12 20:59:09.820376 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:59:10.307247 sshd[6214]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:10.311882 systemd[1]: sshd@15-10.200.8.15:22-10.200.16.10:43810.service: Deactivated successfully.
Nov 12 20:59:10.314173 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:59:10.315537 systemd-logind[1689]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:59:10.317043 systemd-logind[1689]: Removed session 18.
Nov 12 20:59:15.421370 systemd[1]: Started sshd@16-10.200.8.15:22-10.200.16.10:43814.service - OpenSSH per-connection server daemon (10.200.16.10:43814).
Nov 12 20:59:16.051770 sshd[6227]: Accepted publickey for core from 10.200.16.10 port 43814 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:16.053663 sshd[6227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:16.060309 systemd-logind[1689]: New session 19 of user core.
Nov 12 20:59:16.065370 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:59:16.598666 sshd[6227]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:16.603698 systemd-logind[1689]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:59:16.604554 systemd[1]: sshd@16-10.200.8.15:22-10.200.16.10:43814.service: Deactivated successfully.
Nov 12 20:59:16.608021 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:59:16.612241 systemd-logind[1689]: Removed session 19.
Nov 12 20:59:16.716453 systemd[1]: Started sshd@17-10.200.8.15:22-10.200.16.10:43826.service - OpenSSH per-connection server daemon (10.200.16.10:43826).
Nov 12 20:59:17.340938 sshd[6240]: Accepted publickey for core from 10.200.16.10 port 43826 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:17.342869 sshd[6240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:17.347859 systemd-logind[1689]: New session 20 of user core.
Nov 12 20:59:17.350368 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:59:17.913050 sshd[6240]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:17.916553 systemd[1]: sshd@17-10.200.8.15:22-10.200.16.10:43826.service: Deactivated successfully.
Nov 12 20:59:17.919117 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:59:17.921085 systemd-logind[1689]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:59:17.922480 systemd-logind[1689]: Removed session 20.
Nov 12 20:59:18.030524 systemd[1]: Started sshd@18-10.200.8.15:22-10.200.16.10:43830.service - OpenSSH per-connection server daemon (10.200.16.10:43830).
Nov 12 20:59:18.650452 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 43830 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:18.651968 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:18.656584 systemd-logind[1689]: New session 21 of user core.
Nov 12 20:59:18.659367 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:59:20.881687 sshd[6250]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:20.885421 systemd[1]: sshd@18-10.200.8.15:22-10.200.16.10:43830.service: Deactivated successfully.
Nov 12 20:59:20.887856 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:59:20.890030 systemd-logind[1689]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:59:20.891335 systemd-logind[1689]: Removed session 21.
Nov 12 20:59:20.992315 systemd[1]: Started sshd@19-10.200.8.15:22-10.200.16.10:60638.service - OpenSSH per-connection server daemon (10.200.16.10:60638).
Nov 12 20:59:21.627612 sshd[6288]: Accepted publickey for core from 10.200.16.10 port 60638 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:21.629118 sshd[6288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:21.633245 systemd-logind[1689]: New session 22 of user core.
Nov 12 20:59:21.640350 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:59:22.234306 sshd[6288]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:22.238959 systemd[1]: sshd@19-10.200.8.15:22-10.200.16.10:60638.service: Deactivated successfully.
Nov 12 20:59:22.242594 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:59:22.243529 systemd-logind[1689]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:59:22.244576 systemd-logind[1689]: Removed session 22.
Nov 12 20:59:22.350607 systemd[1]: Started sshd@20-10.200.8.15:22-10.200.16.10:60652.service - OpenSSH per-connection server daemon (10.200.16.10:60652).
Nov 12 20:59:22.969755 sshd[6299]: Accepted publickey for core from 10.200.16.10 port 60652 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:22.971655 sshd[6299]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:22.977347 systemd-logind[1689]: New session 23 of user core.
Nov 12 20:59:22.984570 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:59:23.465108 sshd[6299]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:23.469667 systemd-logind[1689]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:59:23.470516 systemd[1]: sshd@20-10.200.8.15:22-10.200.16.10:60652.service: Deactivated successfully.
Nov 12 20:59:23.472638 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:59:23.473739 systemd-logind[1689]: Removed session 23.
Nov 12 20:59:28.576289 systemd[1]: Started sshd@21-10.200.8.15:22-10.200.16.10:53338.service - OpenSSH per-connection server daemon (10.200.16.10:53338).
Nov 12 20:59:29.203020 sshd[6313]: Accepted publickey for core from 10.200.16.10 port 53338 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:29.204578 sshd[6313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:29.208477 systemd-logind[1689]: New session 24 of user core.
Nov 12 20:59:29.213585 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:59:29.700445 sshd[6313]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:29.704666 systemd[1]: sshd@21-10.200.8.15:22-10.200.16.10:53338.service: Deactivated successfully.
Nov 12 20:59:29.709228 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:59:29.710030 systemd-logind[1689]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:59:29.711106 systemd-logind[1689]: Removed session 24.
Nov 12 20:59:33.246753 systemd[1]: run-containerd-runc-k8s.io-2874c349e32675dddceaad07447b9210192d4c52cf79b815f0fcebf92d96211f-runc.EHvHv1.mount: Deactivated successfully.
Nov 12 20:59:34.819316 systemd[1]: Started sshd@22-10.200.8.15:22-10.200.16.10:53350.service - OpenSSH per-connection server daemon (10.200.16.10:53350).
Nov 12 20:59:35.447982 sshd[6369]: Accepted publickey for core from 10.200.16.10 port 53350 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:35.449490 sshd[6369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:35.453390 systemd-logind[1689]: New session 25 of user core.
Nov 12 20:59:35.460039 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:59:35.944904 sshd[6369]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:35.949209 systemd[1]: sshd@22-10.200.8.15:22-10.200.16.10:53350.service: Deactivated successfully.
Nov 12 20:59:35.951417 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:59:35.952267 systemd-logind[1689]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:59:35.953679 systemd-logind[1689]: Removed session 25.
Nov 12 20:59:41.059656 systemd[1]: Started sshd@23-10.200.8.15:22-10.200.16.10:50072.service - OpenSSH per-connection server daemon (10.200.16.10:50072).
Nov 12 20:59:41.692603 sshd[6384]: Accepted publickey for core from 10.200.16.10 port 50072 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:41.694057 sshd[6384]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:41.698704 systemd-logind[1689]: New session 26 of user core.
Nov 12 20:59:41.701370 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:59:42.193747 sshd[6384]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:42.197702 systemd[1]: sshd@23-10.200.8.15:22-10.200.16.10:50072.service: Deactivated successfully.
Nov 12 20:59:42.199810 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:59:42.200693 systemd-logind[1689]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:59:42.202108 systemd-logind[1689]: Removed session 26.
Nov 12 20:59:47.308489 systemd[1]: Started sshd@24-10.200.8.15:22-10.200.16.10:50082.service - OpenSSH per-connection server daemon (10.200.16.10:50082).
Nov 12 20:59:47.927620 sshd[6404]: Accepted publickey for core from 10.200.16.10 port 50082 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:47.929437 sshd[6404]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:47.936434 systemd-logind[1689]: New session 27 of user core.
Nov 12 20:59:47.943352 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:59:48.428002 sshd[6404]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:48.432100 systemd[1]: sshd@24-10.200.8.15:22-10.200.16.10:50082.service: Deactivated successfully.
Nov 12 20:59:48.434779 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:59:48.436445 systemd-logind[1689]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:59:48.437848 systemd-logind[1689]: Removed session 27.
Nov 12 20:59:53.561485 systemd[1]: Started sshd@25-10.200.8.15:22-10.200.16.10:48446.service - OpenSSH per-connection server daemon (10.200.16.10:48446).
Nov 12 20:59:54.181219 sshd[6439]: Accepted publickey for core from 10.200.16.10 port 48446 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:59:54.182932 sshd[6439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:59:54.188719 systemd-logind[1689]: New session 28 of user core.
Nov 12 20:59:54.194497 systemd[1]: Started session-28.scope - Session 28 of User core.
Nov 12 20:59:54.684350 sshd[6439]: pam_unix(sshd:session): session closed for user core
Nov 12 20:59:54.687525 systemd[1]: sshd@25-10.200.8.15:22-10.200.16.10:48446.service: Deactivated successfully.
Nov 12 20:59:54.689823 systemd[1]: session-28.scope: Deactivated successfully.
Nov 12 20:59:54.691310 systemd-logind[1689]: Session 28 logged out. Waiting for processes to exit.
Nov 12 20:59:54.692881 systemd-logind[1689]: Removed session 28.
Nov 12 20:59:59.806725 systemd[1]: Started sshd@26-10.200.8.15:22-10.200.16.10:53240.service - OpenSSH per-connection server daemon (10.200.16.10:53240).
Nov 12 21:00:00.424816 sshd[6454]: Accepted publickey for core from 10.200.16.10 port 53240 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 21:00:00.426392 sshd[6454]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 21:00:00.431258 systemd-logind[1689]: New session 29 of user core.
Nov 12 21:00:00.437393 systemd[1]: Started session-29.scope - Session 29 of User core.
Nov 12 21:00:00.930876 sshd[6454]: pam_unix(sshd:session): session closed for user core
Nov 12 21:00:00.934481 systemd[1]: sshd@26-10.200.8.15:22-10.200.16.10:53240.service: Deactivated successfully.
Nov 12 21:00:00.937038 systemd[1]: session-29.scope: Deactivated successfully.
Nov 12 21:00:00.938709 systemd-logind[1689]: Session 29 logged out. Waiting for processes to exit.
Nov 12 21:00:00.939996 systemd-logind[1689]: Removed session 29.