Nov 12 20:53:14.074484 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024
Nov 12 20:53:14.074520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:14.074535 kernel: BIOS-provided physical RAM map:
Nov 12 20:53:14.074547 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Nov 12 20:53:14.074558 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Nov 12 20:53:14.074570 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Nov 12 20:53:14.074584 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20
Nov 12 20:53:14.074599 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved
Nov 12 20:53:14.074611 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Nov 12 20:53:14.074622 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Nov 12 20:53:14.075254 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Nov 12 20:53:14.075277 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Nov 12 20:53:14.075289 kernel: printk: bootconsole [earlyser0] enabled
Nov 12 20:53:14.075302 kernel: NX (Execute Disable) protection: active
Nov 12 20:53:14.075321 kernel: APIC: Static calls initialized
Nov 12 20:53:14.075333 kernel: efi: EFI v2.7 by Microsoft
Nov 12 20:53:14.075346 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98
Nov 12 20:53:14.075358 kernel: SMBIOS 3.1.0 present.
Nov 12 20:53:14.075370 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
Nov 12 20:53:14.075382 kernel: Hypervisor detected: Microsoft Hyper-V
Nov 12 20:53:14.075393 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Nov 12 20:53:14.075407 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0
Nov 12 20:53:14.075421 kernel: Hyper-V: Nested features: 0x1e0101
Nov 12 20:53:14.075437 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Nov 12 20:53:14.075457 kernel: Hyper-V: Using hypercall for remote TLB flush
Nov 12 20:53:14.075472 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:53:14.075486 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Nov 12 20:53:14.075498 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Nov 12 20:53:14.075511 kernel: tsc: Detected 2593.904 MHz processor
Nov 12 20:53:14.075525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Nov 12 20:53:14.075538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Nov 12 20:53:14.075551 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Nov 12 20:53:14.075564 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Nov 12 20:53:14.075580 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Nov 12 20:53:14.075593 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Nov 12 20:53:14.075606 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Nov 12 20:53:14.075619 kernel: Using GB pages for direct mapping
Nov 12 20:53:14.075631 kernel: Secure boot disabled
Nov 12 20:53:14.075659 kernel: ACPI: Early table checksum verification disabled
Nov 12 20:53:14.075672 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Nov 12 20:53:14.075691 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075708 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075722 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Nov 12 20:53:14.075735 kernel: ACPI: FACS 0x000000003FFFE000 000040
Nov 12 20:53:14.075749 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075763 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075777 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075794 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075808 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075822 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075835 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Nov 12 20:53:14.075849 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Nov 12 20:53:14.075863 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Nov 12 20:53:14.075877 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Nov 12 20:53:14.075891 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Nov 12 20:53:14.075907 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Nov 12 20:53:14.075921 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Nov 12 20:53:14.075935 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Nov 12 20:53:14.075949 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Nov 12 20:53:14.075963 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Nov 12 20:53:14.075977 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Nov 12 20:53:14.075990 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Nov 12 20:53:14.076004 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Nov 12 20:53:14.076018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Nov 12 20:53:14.076034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Nov 12 20:53:14.076048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Nov 12 20:53:14.076062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Nov 12 20:53:14.076076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Nov 12 20:53:14.076090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Nov 12 20:53:14.076104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Nov 12 20:53:14.076118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Nov 12 20:53:14.076132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Nov 12 20:53:14.076146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Nov 12 20:53:14.076162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Nov 12 20:53:14.076176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Nov 12 20:53:14.076190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Nov 12 20:53:14.076204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Nov 12 20:53:14.076218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Nov 12 20:53:14.076231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Nov 12 20:53:14.076245 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Nov 12 20:53:14.076259 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Nov 12 20:53:14.076273 kernel: Zone ranges:
Nov 12 20:53:14.076290 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Nov 12 20:53:14.076304 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Nov 12 20:53:14.076317 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:53:14.076331 kernel: Movable zone start for each node
Nov 12 20:53:14.076345 kernel: Early memory node ranges
Nov 12 20:53:14.076358 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Nov 12 20:53:14.076372 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Nov 12 20:53:14.076386 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Nov 12 20:53:14.076399 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Nov 12 20:53:14.076416 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Nov 12 20:53:14.076430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Nov 12 20:53:14.076443 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Nov 12 20:53:14.076457 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Nov 12 20:53:14.076471 kernel: ACPI: PM-Timer IO Port: 0x408
Nov 12 20:53:14.076485 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Nov 12 20:53:14.076498 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Nov 12 20:53:14.076512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Nov 12 20:53:14.076526 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Nov 12 20:53:14.076543 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Nov 12 20:53:14.076556 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Nov 12 20:53:14.076571 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Nov 12 20:53:14.076584 kernel: Booting paravirtualized kernel on Hyper-V
Nov 12 20:53:14.076599 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Nov 12 20:53:14.076613 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Nov 12 20:53:14.076627 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Nov 12 20:53:14.076648 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Nov 12 20:53:14.076662 kernel: pcpu-alloc: [0] 0 1
Nov 12 20:53:14.076679 kernel: Hyper-V: PV spinlocks enabled
Nov 12 20:53:14.076692 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Nov 12 20:53:14.076708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:14.076723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Nov 12 20:53:14.076736 kernel: random: crng init done
Nov 12 20:53:14.076750 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Nov 12 20:53:14.076764 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Nov 12 20:53:14.076778 kernel: Fallback order for Node 0: 0
Nov 12 20:53:14.076795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Nov 12 20:53:14.076823 kernel: Policy zone: Normal
Nov 12 20:53:14.076837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Nov 12 20:53:14.076854 kernel: software IO TLB: area num 2.
Nov 12 20:53:14.076869 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved)
Nov 12 20:53:14.076884 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Nov 12 20:53:14.076898 kernel: ftrace: allocating 37799 entries in 148 pages
Nov 12 20:53:14.076913 kernel: ftrace: allocated 148 pages with 3 groups
Nov 12 20:53:14.076928 kernel: Dynamic Preempt: voluntary
Nov 12 20:53:14.076942 kernel: rcu: Preemptible hierarchical RCU implementation.
Nov 12 20:53:14.076958 kernel: rcu: RCU event tracing is enabled.
Nov 12 20:53:14.076976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Nov 12 20:53:14.076991 kernel: Trampoline variant of Tasks RCU enabled.
Nov 12 20:53:14.077006 kernel: Rude variant of Tasks RCU enabled.
Nov 12 20:53:14.077020 kernel: Tracing variant of Tasks RCU enabled.
Nov 12 20:53:14.077035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Nov 12 20:53:14.077053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Nov 12 20:53:14.077068 kernel: Using NULL legacy PIC
Nov 12 20:53:14.077082 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Nov 12 20:53:14.077097 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Nov 12 20:53:14.077112 kernel: Console: colour dummy device 80x25
Nov 12 20:53:14.077126 kernel: printk: console [tty1] enabled
Nov 12 20:53:14.077141 kernel: printk: console [ttyS0] enabled
Nov 12 20:53:14.077155 kernel: printk: bootconsole [earlyser0] disabled
Nov 12 20:53:14.077170 kernel: ACPI: Core revision 20230628
Nov 12 20:53:14.077185 kernel: Failed to register legacy timer interrupt
Nov 12 20:53:14.077202 kernel: APIC: Switch to symmetric I/O mode setup
Nov 12 20:53:14.077216 kernel: Hyper-V: enabling crash_kexec_post_notifiers
Nov 12 20:53:14.077231 kernel: Hyper-V: Using IPI hypercalls
Nov 12 20:53:14.077246 kernel: APIC: send_IPI() replaced with hv_send_ipi()
Nov 12 20:53:14.077260 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask()
Nov 12 20:53:14.077275 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself()
Nov 12 20:53:14.077290 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself()
Nov 12 20:53:14.077305 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all()
Nov 12 20:53:14.077320 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self()
Nov 12 20:53:14.077337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904)
Nov 12 20:53:14.077352 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Nov 12 20:53:14.077367 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Nov 12 20:53:14.077382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Nov 12 20:53:14.077396 kernel: Spectre V2 : Mitigation: Retpolines
Nov 12 20:53:14.077411 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Nov 12 20:53:14.077425 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Nov 12 20:53:14.077440 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Nov 12 20:53:14.077455 kernel: RETBleed: Vulnerable
Nov 12 20:53:14.077472 kernel: Speculative Store Bypass: Vulnerable
Nov 12 20:53:14.077487 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:53:14.077502 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Nov 12 20:53:14.077516 kernel: GDS: Unknown: Dependent on hypervisor status
Nov 12 20:53:14.077531 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Nov 12 20:53:14.077545 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Nov 12 20:53:14.077560 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Nov 12 20:53:14.077575 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Nov 12 20:53:14.077589 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Nov 12 20:53:14.077604 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Nov 12 20:53:14.077618 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Nov 12 20:53:14.078027 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Nov 12 20:53:14.078048 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Nov 12 20:53:14.078063 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Nov 12 20:53:14.078078 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Nov 12 20:53:14.078093 kernel: Freeing SMP alternatives memory: 32K
Nov 12 20:53:14.078107 kernel: pid_max: default: 32768 minimum: 301
Nov 12 20:53:14.078122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Nov 12 20:53:14.078137 kernel: landlock: Up and running.
Nov 12 20:53:14.078151 kernel: SELinux: Initializing.
Nov 12 20:53:14.078166 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.078181 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.078196 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Nov 12 20:53:14.078216 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:14.078231 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:14.078246 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Nov 12 20:53:14.078261 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Nov 12 20:53:14.078276 kernel: signal: max sigframe size: 3632
Nov 12 20:53:14.078291 kernel: rcu: Hierarchical SRCU implementation.
Nov 12 20:53:14.078306 kernel: rcu: Max phase no-delay instances is 400.
Nov 12 20:53:14.078320 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Nov 12 20:53:14.078335 kernel: smp: Bringing up secondary CPUs ...
Nov 12 20:53:14.078353 kernel: smpboot: x86: Booting SMP configuration:
Nov 12 20:53:14.078367 kernel: .... node #0, CPUs: #1
Nov 12 20:53:14.078382 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Nov 12 20:53:14.078398 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Nov 12 20:53:14.078413 kernel: smp: Brought up 1 node, 2 CPUs
Nov 12 20:53:14.078427 kernel: smpboot: Max logical packages: 1
Nov 12 20:53:14.078442 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS)
Nov 12 20:53:14.078457 kernel: devtmpfs: initialized
Nov 12 20:53:14.078474 kernel: x86/mm: Memory block size: 128MB
Nov 12 20:53:14.078489 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Nov 12 20:53:14.078505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Nov 12 20:53:14.078520 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Nov 12 20:53:14.078535 kernel: pinctrl core: initialized pinctrl subsystem
Nov 12 20:53:14.078550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Nov 12 20:53:14.078564 kernel: audit: initializing netlink subsys (disabled)
Nov 12 20:53:14.078579 kernel: audit: type=2000 audit(1731444793.027:1): state=initialized audit_enabled=0 res=1
Nov 12 20:53:14.078594 kernel: thermal_sys: Registered thermal governor 'step_wise'
Nov 12 20:53:14.078611 kernel: thermal_sys: Registered thermal governor 'user_space'
Nov 12 20:53:14.078626 kernel: cpuidle: using governor menu
Nov 12 20:53:14.078651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Nov 12 20:53:14.078666 kernel: dca service started, version 1.12.1
Nov 12 20:53:14.078681 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff]
Nov 12 20:53:14.078695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Nov 12 20:53:14.078710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:14.078725 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:14.078739 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:14.078757 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:14.078772 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:14.078786 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:14.078801 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:14.078816 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:14.078831 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:14.078845 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:14.078860 kernel: ACPI: Interpreter enabled
Nov 12 20:53:14.078875 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:53:14.078892 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:14.078907 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:14.078922 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 12 20:53:14.078937 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 12 20:53:14.078951 kernel: iommu: Default domain type: Translated
Nov 12 20:53:14.078966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:14.078980 kernel: efivars: Registered efivars operations
Nov 12 20:53:14.078995 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:14.079010 kernel: PCI: System does not support PCI
Nov 12 20:53:14.079026 kernel: vgaarb: loaded
Nov 12 20:53:14.079039 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 12 20:53:14.079053 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:14.079068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:14.079082 kernel: pnp: PnP ACPI init
Nov 12 20:53:14.079096 kernel: pnp: PnP ACPI: found 3 devices
Nov 12 20:53:14.079110 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:14.079124 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:14.079140 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:14.079158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:53:14.079172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:14.079186 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:14.079199 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:14.079213 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:53:14.079227 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.079241 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.079254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:14.079270 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:14.079286 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:14.079301 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:53:14.079315 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Nov 12 20:53:14.079328 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:53:14.079342 kernel: Initialise system trusted keyrings
Nov 12 20:53:14.079355 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:53:14.079369 kernel: Key type asymmetric registered
Nov 12 20:53:14.079383 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:14.079398 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:14.079417 kernel: io scheduler mq-deadline registered
Nov 12 20:53:14.079429 kernel: io scheduler kyber registered
Nov 12 20:53:14.079444 kernel: io scheduler bfq registered
Nov 12 20:53:14.079458 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:14.079473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:14.079488 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:14.079503 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:53:14.079518 kernel: i8042: PNP: No PS/2 controller found.
Nov 12 20:53:14.079756 kernel: rtc_cmos 00:02: registered as rtc0
Nov 12 20:53:14.079882 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:53:13 UTC (1731444793)
Nov 12 20:53:14.079988 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 12 20:53:14.080006 kernel: intel_pstate: CPU model not supported
Nov 12 20:53:14.080021 kernel: efifb: probing for efifb
Nov 12 20:53:14.080035 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 12 20:53:14.080049 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 12 20:53:14.080064 kernel: efifb: scrolling: redraw
Nov 12 20:53:14.080081 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 12 20:53:14.080095 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:53:14.080110 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:53:14.080124 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:53:14.080138 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:53:14.080153 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:14.080167 kernel: Segment Routing with IPv6
Nov 12 20:53:14.080181 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:14.080195 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:14.080209 kernel: Key type dns_resolver registered
Nov 12 20:53:14.080226 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:14.080240 kernel: sched_clock: Marking stable (845003800, 41748200)->(1079105600, -192353600)
Nov 12 20:53:14.080254 kernel: registered taskstats version 1
Nov 12 20:53:14.080268 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:14.080283 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:14.080296 kernel: Key type .fscrypt registered
Nov 12 20:53:14.080310 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:14.080324 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:14.080340 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:14.080354 kernel: ima: No architecture policies found
Nov 12 20:53:14.080370 kernel: clk: Disabling unused clocks
Nov 12 20:53:14.080384 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:14.080399 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:14.080413 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:14.080426 kernel: Run /init as init process
Nov 12 20:53:14.080439 kernel: with arguments:
Nov 12 20:53:14.080453 kernel: /init
Nov 12 20:53:14.080465 kernel: with environment:
Nov 12 20:53:14.080480 kernel: HOME=/
Nov 12 20:53:14.080492 kernel: TERM=linux
Nov 12 20:53:14.080504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:14.080519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:14.080534 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:14.080549 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:14.080562 systemd[1]: Running in initrd.
Nov 12 20:53:14.080578 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:14.080592 systemd[1]: Hostname set to .
Nov 12 20:53:14.080606 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:14.080620 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:14.080661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:14.080677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:14.080692 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:14.080707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:14.080731 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:14.080746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:14.080763 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:14.080779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:14.080794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:14.080809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:14.080824 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:14.080842 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:14.080857 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:14.080872 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:14.080887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:14.080902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:14.080917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:14.080933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:14.080948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:14.080962 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:14.080981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:14.080997 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:14.081012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:14.081027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:14.081043 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:14.081059 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:14.081075 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:14.081090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:14.087332 systemd-journald[176]: Collecting audit messages is disabled.
Nov 12 20:53:14.087369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:14.087382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:14.087392 systemd-journald[176]: Journal started
Nov 12 20:53:14.087421 systemd-journald[176]: Runtime Journal (/run/log/journal/a5d6608346314ff2a3eb74237cb1d4cc) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:14.092651 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:14.092781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:14.093820 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:14.106426 systemd-modules-load[177]: Inserted module 'overlay'
Nov 12 20:53:14.111038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:14.123812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:14.130041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:14.137942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:14.150717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:14.153382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:14.160821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:14.168121 kernel: Bridge firewalling registered
Nov 12 20:53:14.170907 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 12 20:53:14.172418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:14.181469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:14.189808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:14.203938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:14.210117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:14.220825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:14.227001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:14.232336 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:14.251857 dracut-cmdline[212]: dracut-dracut-053
Nov 12 20:53:14.254857 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:14.291368 systemd-resolved[214]: Positive Trust Anchors:
Nov 12 20:53:14.291383 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:14.291437 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:14.314967 systemd-resolved[214]: Defaulting to hostname 'linux'.
Nov 12 20:53:14.318048 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:14.323093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:14.339657 kernel: SCSI subsystem initialized
Nov 12 20:53:14.349653 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:14.360661 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:14.381239 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:14.381315 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:14.417422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:14.424871 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:14.454799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:14.454873 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:14.457963 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:14.497662 kernel: raid6: avx512x4 gen() 18254 MB/s
Nov 12 20:53:14.516654 kernel: raid6: avx512x2 gen() 18307 MB/s
Nov 12 20:53:14.534655 kernel: raid6: avx512x1 gen() 18366 MB/s
Nov 12 20:53:14.552649 kernel: raid6: avx2x4 gen() 18329 MB/s
Nov 12 20:53:14.571653 kernel: raid6: avx2x2 gen() 18278 MB/s
Nov 12 20:53:14.591829 kernel: raid6: avx2x1 gen() 13980 MB/s
Nov 12 20:53:14.591880 kernel: raid6: using algorithm avx512x1 gen() 18366 MB/s
Nov 12 20:53:14.612332 kernel: raid6: .... xor() 25994 MB/s, rmw enabled
Nov 12 20:53:14.612369 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:53:14.634663 kernel: xor: automatically using best checksumming function avx
Nov 12 20:53:14.782669 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:14.792320 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:14.800795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:14.814126 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 12 20:53:14.818574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:14.833787 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:14.845345 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 12 20:53:14.873224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:14.887782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:14.928539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:14.941820 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:53:14.965518 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:14.969609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:14.977754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:14.980870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:14.999947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:53:15.018654 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:53:15.021898 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:15.045654 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:53:15.048389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:15.048697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:15.059811 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:53:15.059751 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:15.062707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:15.062853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.074401 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.087758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.094546 kernel: hv_vmbus: Vmbus version:5.2
Nov 12 20:53:15.099411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:15.099828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.115899 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 12 20:53:15.115823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.130651 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 12 20:53:15.130687 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 20:53:15.145676 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 12 20:53:15.145748 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 12 20:53:15.148375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.155703 kernel: hv_vmbus: registering driver hv_netvsc
Nov 12 20:53:15.158653 kernel: hv_vmbus: registering driver hid_hyperv
Nov 12 20:53:15.164764 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 12 20:53:15.164967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:15.174439 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 12 20:53:15.196469 kernel: hv_vmbus: registering driver hv_storvsc
Nov 12 20:53:15.196528 kernel: PTP clock support registered
Nov 12 20:53:15.201285 kernel: scsi host1: storvsc_host_t
Nov 12 20:53:15.201365 kernel: scsi host0: storvsc_host_t
Nov 12 20:53:15.205618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:15.214651 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 12 20:53:15.214703 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Nov 12 20:53:15.225198 kernel: hv_utils: Registering HyperV Utility Driver
Nov 12 20:53:15.225264 kernel: hv_vmbus: registering driver hv_utils
Nov 12 20:53:15.227613 kernel: hv_utils: Heartbeat IC version 3.0
Nov 12 20:53:15.230301 kernel: hv_utils: Shutdown IC version 3.2
Nov 12 20:53:15.230330 kernel: hv_utils: TimeSync IC version 4.0
Nov 12 20:53:16.406167 systemd-resolved[214]: Clock change detected. Flushing caches.
Nov 12 20:53:16.420143 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 12 20:53:16.422553 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:53:16.422574 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 12 20:53:16.433381 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 12 20:53:16.447082 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 12 20:53:16.447269 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 12 20:53:16.447427 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 12 20:53:16.447579 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 12 20:53:16.447744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:16.447767 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 12 20:53:16.540205 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: VF slot 1 added
Nov 12 20:53:16.551270 kernel: hv_vmbus: registering driver hv_pci
Nov 12 20:53:16.551329 kernel: hv_pci 1b7a5e33-e495-4cef-98e9-a7187a3283cc: PCI VMBus probing: Using version 0x10004
Nov 12 20:53:16.593554 kernel: hv_pci 1b7a5e33-e495-4cef-98e9-a7187a3283cc: PCI host bridge to bus e495:00
Nov 12 20:53:16.593749 kernel: pci_bus e495:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 12 20:53:16.593926 kernel: pci_bus e495:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 12 20:53:16.594327 kernel: pci e495:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 12 20:53:16.594536 kernel: pci e495:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:16.594703 kernel: pci e495:00:02.0: enabling Extended Tags
Nov 12 20:53:16.594887 kernel: pci e495:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e495:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 12 20:53:16.595079 kernel: pci_bus e495:00: busn_res: [bus 00-ff] end is updated to 00
Nov 12 20:53:16.595308 kernel: pci e495:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:16.759052 kernel: mlx5_core e495:00:02.0: enabling device (0000 -> 0002)
Nov 12 20:53:17.008491 kernel: mlx5_core e495:00:02.0: firmware version: 14.30.1284
Nov 12 20:53:17.008702 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447)
Nov 12 20:53:17.008725 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444)
Nov 12 20:53:17.008744 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: VF registering: eth1
Nov 12 20:53:17.008894 kernel: mlx5_core e495:00:02.0 eth1: joined to eth0
Nov 12 20:53:17.009086 kernel: mlx5_core e495:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 12 20:53:16.873782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 12 20:53:16.968980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:53:17.017385 kernel: mlx5_core e495:00:02.0 enP58517s1: renamed from eth1
Nov 12 20:53:16.979843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 12 20:53:16.983365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 12 20:53:17.000230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 12 20:53:17.028134 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:17.044981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:17.051980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:18.060058 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:18.060500 disk-uuid[602]: The operation has completed successfully.
Nov 12 20:53:18.129583 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:18.129709 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:18.169135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:18.174941 sh[688]: Success
Nov 12 20:53:18.205494 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:53:18.400592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:18.413177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:18.419452 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:18.443976 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:18.444027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:18.449146 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:18.451741 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:18.454197 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:18.665446 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:18.670632 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:18.679121 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:18.687940 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:18.705138 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:18.705195 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:18.707644 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:18.728487 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:18.736714 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:18.741197 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:18.745158 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:18.757181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:18.777701 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:18.789154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:18.810430 systemd-networkd[872]: lo: Link UP
Nov 12 20:53:18.810439 systemd-networkd[872]: lo: Gained carrier
Nov 12 20:53:18.813113 systemd-networkd[872]: Enumeration completed
Nov 12 20:53:18.813575 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:18.817262 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:18.817422 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:18.817425 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:18.884981 kernel: mlx5_core e495:00:02.0 enP58517s1: Link up
Nov 12 20:53:18.918983 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: Data path switched to VF: enP58517s1
Nov 12 20:53:18.919155 systemd-networkd[872]: enP58517s1: Link UP
Nov 12 20:53:18.919322 systemd-networkd[872]: eth0: Link UP
Nov 12 20:53:18.921336 systemd-networkd[872]: eth0: Gained carrier
Nov 12 20:53:18.921353 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:18.930334 systemd-networkd[872]: enP58517s1: Gained carrier
Nov 12 20:53:18.966002 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:53:19.551731 ignition[837]: Ignition 2.19.0
Nov 12 20:53:19.551746 ignition[837]: Stage: fetch-offline
Nov 12 20:53:19.551790 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.559017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:19.551802 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.552020 ignition[837]: parsed url from cmdline: ""
Nov 12 20:53:19.552025 ignition[837]: no config URL provided
Nov 12 20:53:19.552033 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.582274 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:53:19.552045 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.552053 ignition[837]: failed to fetch config: resource requires networking
Nov 12 20:53:19.552309 ignition[837]: Ignition finished successfully
Nov 12 20:53:19.601430 ignition[880]: Ignition 2.19.0
Nov 12 20:53:19.601445 ignition[880]: Stage: fetch
Nov 12 20:53:19.601665 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.601678 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.601783 ignition[880]: parsed url from cmdline: ""
Nov 12 20:53:19.601788 ignition[880]: no config URL provided
Nov 12 20:53:19.601793 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.601802 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.601824 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 12 20:53:19.689586 ignition[880]: GET result: OK
Nov 12 20:53:19.689691 ignition[880]: config has been read from IMDS userdata
Nov 12 20:53:19.689721 ignition[880]: parsing config with SHA512: 98f8c1081d68f66cb1ea19b988e5bb1078126596ba85e9483bec1120930dfe92742d2a6da82f7c9bcac72c8fb7e6616f5eaeb722738002313148a92176ec3523
Nov 12 20:53:19.697206 unknown[880]: fetched base config from "system"
Nov 12 20:53:19.699575 unknown[880]: fetched base config from "system"
Nov 12 20:53:19.699588 unknown[880]: fetched user config from "azure"
Nov 12 20:53:19.700119 ignition[880]: fetch: fetch complete
Nov 12 20:53:19.701762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:53:19.700125 ignition[880]: fetch: fetch passed
Nov 12 20:53:19.700174 ignition[880]: Ignition finished successfully
Nov 12 20:53:19.715006 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:19.729498 ignition[887]: Ignition 2.19.0
Nov 12 20:53:19.729508 ignition[887]: Stage: kargs
Nov 12 20:53:19.729725 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.729736 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.731032 ignition[887]: kargs: kargs passed
Nov 12 20:53:19.731077 ignition[887]: Ignition finished successfully
Nov 12 20:53:19.739220 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:19.750156 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:19.765281 ignition[893]: Ignition 2.19.0
Nov 12 20:53:19.765291 ignition[893]: Stage: disks
Nov 12 20:53:19.767221 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:19.765520 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.771493 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:19.765529 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.766369 ignition[893]: disks: disks passed
Nov 12 20:53:19.766418 ignition[893]: Ignition finished successfully
Nov 12 20:53:19.787864 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:19.790397 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:19.790433 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:19.790773 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:19.807378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:19.862213 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 12 20:53:19.867648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:19.884072 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:19.971361 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:19.971972 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:19.974545 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:20.014113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:20.018849 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:20.026110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:53:20.046339 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Nov 12 20:53:20.046364 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:20.046379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:20.046394 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:20.031405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:20.054768 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:20.031440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:20.036133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:20.056168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:20.069163 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:20.416193 systemd-networkd[872]: enP58517s1: Gained IPv6LL
Nov 12 20:53:20.575539 coreos-metadata[914]: Nov 12 20:53:20.575 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:53:20.582066 coreos-metadata[914]: Nov 12 20:53:20.578 INFO Fetch successful
Nov 12 20:53:20.582066 coreos-metadata[914]: Nov 12 20:53:20.578 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:53:20.589382 coreos-metadata[914]: Nov 12 20:53:20.588 INFO Fetch successful
Nov 12 20:53:20.601432 coreos-metadata[914]: Nov 12 20:53:20.601 INFO wrote hostname ci-4081.2.0-a-c73ec1ae7a to /sysroot/etc/hostname
Nov 12 20:53:20.603095 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:53:20.655221 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:20.688871 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:20.708352 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:20.713231 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:20.864228 systemd-networkd[872]: eth0: Gained IPv6LL
Nov 12 20:53:21.597670 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:21.606163 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:21.620444 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:21.617469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:21.626832 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:21.650776 ignition[1030]: INFO : Ignition 2.19.0
Nov 12 20:53:21.650776 ignition[1030]: INFO : Stage: mount
Nov 12 20:53:21.651261 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:21.660113 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:21.660113 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:21.660113 ignition[1030]: INFO : mount: mount passed
Nov 12 20:53:21.660113 ignition[1030]: INFO : Ignition finished successfully
Nov 12 20:53:21.660772 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:21.679041 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:21.687352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:21.705509 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1042)
Nov 12 20:53:21.705559 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:21.706974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:21.710795 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:21.716126 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:21.717537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:21.749895 ignition[1058]: INFO : Ignition 2.19.0
Nov 12 20:53:21.749895 ignition[1058]: INFO : Stage: files
Nov 12 20:53:21.754545 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:21.754545 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:21.754545 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:53:21.789305 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:53:21.789305 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:53:21.850023 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:53:21.853846 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:53:21.853846 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:53:21.850535 unknown[1058]: wrote ssh authorized keys file for user: core
Nov 12 20:53:21.871931 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:21.878629 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:53:21.927237 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:53:22.063047 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:22.069008 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:22.073388 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:22.073388 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:22.081993 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Nov 12 20:53:22.627693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:53:23.068514 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:23.068514 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:53:23.083787 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: files passed
Nov 12 20:53:23.088966 ignition[1058]: INFO : Ignition finished successfully
Nov 12 20:53:23.100663 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:53:23.122439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:53:23.133691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:53:23.140396 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:53:23.147223 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:53:23.160983 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.160983 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.168517 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.173235 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:23.176775 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:53:23.191099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:53:23.217923 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:53:23.218061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:53:23.226781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:23.231761 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:53:23.236604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:53:23.247204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:53:23.261163 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:23.271122 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:53:23.283864 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:23.289506 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:23.295172 systemd[1]: Stopped target timers.target - Timer Units. Nov 12 20:53:23.295344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Nov 12 20:53:23.295470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Nov 12 20:53:23.296598 systemd[1]: Stopped target initrd.target - Initrd Default Target. Nov 12 20:53:23.297080 systemd[1]: Stopped target basic.target - Basic System. Nov 12 20:53:23.297448 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Nov 12 20:53:23.297854 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Nov 12 20:53:23.298291 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Nov 12 20:53:23.298995 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Nov 12 20:53:23.299378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Nov 12 20:53:23.299796 systemd[1]: Stopped target sysinit.target - System Initialization. Nov 12 20:53:23.300221 systemd[1]: Stopped target local-fs.target - Local File Systems. Nov 12 20:53:23.389313 ignition[1111]: INFO : Ignition 2.19.0 Nov 12 20:53:23.389313 ignition[1111]: INFO : Stage: umount Nov 12 20:53:23.389313 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d" Nov 12 20:53:23.389313 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Nov 12 20:53:23.389313 ignition[1111]: INFO : umount: umount passed Nov 12 20:53:23.389313 ignition[1111]: INFO : Ignition finished successfully Nov 12 20:53:23.300657 systemd[1]: Stopped target swap.target - Swaps. Nov 12 20:53:23.301477 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Nov 12 20:53:23.301616 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Nov 12 20:53:23.302345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Nov 12 20:53:23.302738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Nov 12 20:53:23.303143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Nov 12 20:53:23.335446 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Nov 12 20:53:23.338469 systemd[1]: dracut-initqueue.service: Deactivated successfully. Nov 12 20:53:23.338633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Nov 12 20:53:23.339796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Nov 12 20:53:23.339938 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Nov 12 20:53:23.340748 systemd[1]: ignition-files.service: Deactivated successfully. Nov 12 20:53:23.340879 systemd[1]: Stopped ignition-files.service - Ignition (files). Nov 12 20:53:23.341145 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Nov 12 20:53:23.341265 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Nov 12 20:53:23.359702 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Nov 12 20:53:23.368115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Nov 12 20:53:23.370291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Nov 12 20:53:23.373116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Nov 12 20:53:23.381317 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Nov 12 20:53:23.381500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Nov 12 20:53:23.395161 systemd[1]: ignition-mount.service: Deactivated successfully. Nov 12 20:53:23.395271 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Nov 12 20:53:23.400126 systemd[1]: ignition-disks.service: Deactivated successfully. Nov 12 20:53:23.400395 systemd[1]: Stopped ignition-disks.service - Ignition (disks). 
Nov 12 20:53:23.407139 systemd[1]: ignition-kargs.service: Deactivated successfully. Nov 12 20:53:23.407192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Nov 12 20:53:23.411306 systemd[1]: ignition-fetch.service: Deactivated successfully. Nov 12 20:53:23.413404 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Nov 12 20:53:23.490048 systemd[1]: Stopped target network.target - Network. Nov 12 20:53:23.492182 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Nov 12 20:53:23.492264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Nov 12 20:53:23.502502 systemd[1]: Stopped target paths.target - Path Units. Nov 12 20:53:23.502622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Nov 12 20:53:23.504825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Nov 12 20:53:23.509788 systemd[1]: Stopped target slices.target - Slice Units. Nov 12 20:53:23.512232 systemd[1]: Stopped target sockets.target - Socket Units. Nov 12 20:53:23.518062 systemd[1]: iscsid.socket: Deactivated successfully. Nov 12 20:53:23.522179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Nov 12 20:53:23.524777 systemd[1]: iscsiuio.socket: Deactivated successfully. Nov 12 20:53:23.524832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Nov 12 20:53:23.529851 systemd[1]: ignition-setup.service: Deactivated successfully. Nov 12 20:53:23.529922 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Nov 12 20:53:23.534895 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Nov 12 20:53:23.534946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Nov 12 20:53:23.540129 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Nov 12 20:53:23.549369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Nov 12 20:53:23.552013 systemd-networkd[872]: eth0: DHCPv6 lease lost Nov 12 20:53:23.554051 systemd[1]: sysroot-boot.mount: Deactivated successfully. Nov 12 20:53:23.554845 systemd[1]: initrd-cleanup.service: Deactivated successfully. Nov 12 20:53:23.554933 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Nov 12 20:53:23.564794 systemd[1]: systemd-networkd.service: Deactivated successfully. Nov 12 20:53:23.564898 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Nov 12 20:53:23.570058 systemd[1]: systemd-resolved.service: Deactivated successfully. Nov 12 20:53:23.570165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Nov 12 20:53:23.576903 systemd[1]: sysroot-boot.service: Deactivated successfully. Nov 12 20:53:23.577052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Nov 12 20:53:23.592726 systemd[1]: systemd-networkd.socket: Deactivated successfully. Nov 12 20:53:23.592815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Nov 12 20:53:23.597149 systemd[1]: initrd-setup-root.service: Deactivated successfully. Nov 12 20:53:23.597229 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Nov 12 20:53:23.620079 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Nov 12 20:53:23.624496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Nov 12 20:53:23.624568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Nov 12 20:53:23.632730 systemd[1]: systemd-sysctl.service: Deactivated successfully. Nov 12 20:53:23.632788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Nov 12 20:53:23.639836 systemd[1]: systemd-modules-load.service: Deactivated successfully. Nov 12 20:53:23.639894 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Nov 12 20:53:23.651101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. 
Nov 12 20:53:23.651168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Nov 12 20:53:23.656861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Nov 12 20:53:23.673615 systemd[1]: systemd-udevd.service: Deactivated successfully. Nov 12 20:53:23.673782 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Nov 12 20:53:23.679254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Nov 12 20:53:23.679300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Nov 12 20:53:23.681987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Nov 12 20:53:23.682024 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Nov 12 20:53:23.682333 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Nov 12 20:53:23.682373 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Nov 12 20:53:23.683166 systemd[1]: dracut-cmdline.service: Deactivated successfully. Nov 12 20:53:23.683201 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Nov 12 20:53:23.721082 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: Data path switched from VF: enP58517s1 Nov 12 20:53:23.686189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Nov 12 20:53:23.686233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Nov 12 20:53:23.714659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Nov 12 20:53:23.723608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Nov 12 20:53:23.723684 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Nov 12 20:53:23.723794 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Nov 12 20:53:23.723829 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Nov 12 20:53:23.726751 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Nov 12 20:53:23.726795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Nov 12 20:53:23.727497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Nov 12 20:53:23.727534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Nov 12 20:53:23.728453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Nov 12 20:53:23.728536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Nov 12 20:53:23.780838 systemd[1]: network-cleanup.service: Deactivated successfully. Nov 12 20:53:23.780980 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Nov 12 20:53:23.785624 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Nov 12 20:53:23.796125 systemd[1]: Starting initrd-switch-root.service - Switch Root... Nov 12 20:53:23.805716 systemd[1]: Switching root. 
Nov 12 20:53:23.871671 systemd-journald[176]: Journal stopped Nov 12 20:53:14.074484 kernel: Linux version 6.6.60-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Tue Nov 12 16:20:46 -00 2024 Nov 12 20:53:14.074520 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:53:14.074535 kernel: BIOS-provided physical RAM map: Nov 12 20:53:14.074547 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Nov 12 20:53:14.074558 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Nov 12 20:53:14.074570 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Nov 12 20:53:14.074584 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Nov 12 20:53:14.074599 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Nov 12 20:53:14.074611 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Nov 12 20:53:14.074622 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Nov 12 20:53:14.075254 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Nov 12 20:53:14.075277 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Nov 12 20:53:14.075289 kernel: printk: bootconsole [earlyser0] enabled Nov 12 20:53:14.075302 kernel: NX (Execute Disable) protection: active Nov 12 20:53:14.075321 kernel: APIC: Static calls initialized Nov 12 20:53:14.075333 kernel: efi: EFI v2.7 by Microsoft Nov 12 20:53:14.075346 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 
SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee73a98 Nov 12 20:53:14.075358 kernel: SMBIOS 3.1.0 present. Nov 12 20:53:14.075370 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Nov 12 20:53:14.075382 kernel: Hypervisor detected: Microsoft Hyper-V Nov 12 20:53:14.075393 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Nov 12 20:53:14.075407 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Nov 12 20:53:14.075421 kernel: Hyper-V: Nested features: 0x1e0101 Nov 12 20:53:14.075437 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Nov 12 20:53:14.075457 kernel: Hyper-V: Using hypercall for remote TLB flush Nov 12 20:53:14.075472 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:53:14.075486 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Nov 12 20:53:14.075498 kernel: tsc: Marking TSC unstable due to running on Hyper-V Nov 12 20:53:14.075511 kernel: tsc: Detected 2593.904 MHz processor Nov 12 20:53:14.075525 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Nov 12 20:53:14.075538 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Nov 12 20:53:14.075551 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Nov 12 20:53:14.075564 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Nov 12 20:53:14.075580 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Nov 12 20:53:14.075593 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Nov 12 20:53:14.075606 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Nov 12 20:53:14.075619 kernel: Using GB pages for direct mapping Nov 12 20:53:14.075631 kernel: Secure boot disabled Nov 12 20:53:14.075659 kernel: ACPI: Early table checksum verification disabled Nov 12 20:53:14.075672 
kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Nov 12 20:53:14.075691 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075708 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075722 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Nov 12 20:53:14.075735 kernel: ACPI: FACS 0x000000003FFFE000 000040 Nov 12 20:53:14.075749 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075763 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075777 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075794 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075808 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075822 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075835 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Nov 12 20:53:14.075849 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Nov 12 20:53:14.075863 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Nov 12 20:53:14.075877 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Nov 12 20:53:14.075891 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Nov 12 20:53:14.075907 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Nov 12 20:53:14.075921 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Nov 12 20:53:14.075935 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Nov 12 20:53:14.075949 kernel: ACPI: Reserving SRAT table memory at [mem 
0x3ffd4000-0x3ffd42cf] Nov 12 20:53:14.075963 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Nov 12 20:53:14.075977 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Nov 12 20:53:14.075990 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Nov 12 20:53:14.076004 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Nov 12 20:53:14.076018 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Nov 12 20:53:14.076034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Nov 12 20:53:14.076048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Nov 12 20:53:14.076062 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Nov 12 20:53:14.076076 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Nov 12 20:53:14.076090 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Nov 12 20:53:14.076104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Nov 12 20:53:14.076118 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Nov 12 20:53:14.076132 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Nov 12 20:53:14.076146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Nov 12 20:53:14.076162 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Nov 12 20:53:14.076176 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Nov 12 20:53:14.076190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Nov 12 20:53:14.076204 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Nov 12 20:53:14.076218 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Nov 12 20:53:14.076231 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Nov 12 20:53:14.076245 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + 
[mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Nov 12 20:53:14.076259 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Nov 12 20:53:14.076273 kernel: Zone ranges: Nov 12 20:53:14.076290 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Nov 12 20:53:14.076304 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Nov 12 20:53:14.076317 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:53:14.076331 kernel: Movable zone start for each node Nov 12 20:53:14.076345 kernel: Early memory node ranges Nov 12 20:53:14.076358 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Nov 12 20:53:14.076372 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Nov 12 20:53:14.076386 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Nov 12 20:53:14.076399 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Nov 12 20:53:14.076416 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Nov 12 20:53:14.076430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Nov 12 20:53:14.076443 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Nov 12 20:53:14.076457 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Nov 12 20:53:14.076471 kernel: ACPI: PM-Timer IO Port: 0x408 Nov 12 20:53:14.076485 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Nov 12 20:53:14.076498 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Nov 12 20:53:14.076512 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Nov 12 20:53:14.076526 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Nov 12 20:53:14.076543 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Nov 12 20:53:14.076556 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Nov 12 20:53:14.076571 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Nov 12 20:53:14.076584 kernel: Booting paravirtualized kernel on Hyper-V Nov 12 20:53:14.076599 kernel: clocksource: 
refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Nov 12 20:53:14.076613 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Nov 12 20:53:14.076627 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 Nov 12 20:53:14.076648 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Nov 12 20:53:14.076662 kernel: pcpu-alloc: [0] 0 1 Nov 12 20:53:14.076679 kernel: Hyper-V: PV spinlocks enabled Nov 12 20:53:14.076692 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Nov 12 20:53:14.076708 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7 Nov 12 20:53:14.076723 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Nov 12 20:53:14.076736 kernel: random: crng init done Nov 12 20:53:14.076750 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Nov 12 20:53:14.076764 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Nov 12 20:53:14.076778 kernel: Fallback order for Node 0: 0 Nov 12 20:53:14.076795 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Nov 12 20:53:14.076823 kernel: Policy zone: Normal Nov 12 20:53:14.076837 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Nov 12 20:53:14.076854 kernel: software IO TLB: area num 2. 
Nov 12 20:53:14.076869 kernel: Memory: 8077076K/8387460K available (12288K kernel code, 2305K rwdata, 22724K rodata, 42828K init, 2360K bss, 310124K reserved, 0K cma-reserved) Nov 12 20:53:14.076884 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Nov 12 20:53:14.076898 kernel: ftrace: allocating 37799 entries in 148 pages Nov 12 20:53:14.076913 kernel: ftrace: allocated 148 pages with 3 groups Nov 12 20:53:14.076928 kernel: Dynamic Preempt: voluntary Nov 12 20:53:14.076942 kernel: rcu: Preemptible hierarchical RCU implementation. Nov 12 20:53:14.076958 kernel: rcu: RCU event tracing is enabled. Nov 12 20:53:14.076976 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Nov 12 20:53:14.076991 kernel: Trampoline variant of Tasks RCU enabled. Nov 12 20:53:14.077006 kernel: Rude variant of Tasks RCU enabled. Nov 12 20:53:14.077020 kernel: Tracing variant of Tasks RCU enabled. Nov 12 20:53:14.077035 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Nov 12 20:53:14.077053 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Nov 12 20:53:14.077068 kernel: Using NULL legacy PIC Nov 12 20:53:14.077082 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Nov 12 20:53:14.077097 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Nov 12 20:53:14.077112 kernel: Console: colour dummy device 80x25 Nov 12 20:53:14.077126 kernel: printk: console [tty1] enabled Nov 12 20:53:14.077141 kernel: printk: console [ttyS0] enabled Nov 12 20:53:14.077155 kernel: printk: bootconsole [earlyser0] disabled Nov 12 20:53:14.077170 kernel: ACPI: Core revision 20230628 Nov 12 20:53:14.077185 kernel: Failed to register legacy timer interrupt Nov 12 20:53:14.077202 kernel: APIC: Switch to symmetric I/O mode setup Nov 12 20:53:14.077216 kernel: Hyper-V: enabling crash_kexec_post_notifiers Nov 12 20:53:14.077231 kernel: Hyper-V: Using IPI hypercalls Nov 12 20:53:14.077246 kernel: APIC: send_IPI() replaced with hv_send_ipi() Nov 12 20:53:14.077260 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Nov 12 20:53:14.077275 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Nov 12 20:53:14.077290 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Nov 12 20:53:14.077305 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Nov 12 20:53:14.077320 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Nov 12 20:53:14.077337 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.80 BogoMIPS (lpj=2593904) Nov 12 20:53:14.077352 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Nov 12 20:53:14.077367 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Nov 12 20:53:14.077382 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Nov 12 20:53:14.077396 kernel: Spectre V2 : Mitigation: Retpolines Nov 12 20:53:14.077411 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Nov 12 20:53:14.077425 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Nov 12 20:53:14.077440 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Nov 12 20:53:14.077455 kernel: RETBleed: Vulnerable Nov 12 20:53:14.077472 kernel: Speculative Store Bypass: Vulnerable Nov 12 20:53:14.077487 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:53:14.077502 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Nov 12 20:53:14.077516 kernel: GDS: Unknown: Dependent on hypervisor status Nov 12 20:53:14.077531 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Nov 12 20:53:14.077545 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Nov 12 20:53:14.077560 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Nov 12 20:53:14.077575 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Nov 12 20:53:14.077589 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Nov 12 20:53:14.077604 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Nov 12 20:53:14.077618 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Nov 12 20:53:14.078027 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Nov 12 20:53:14.078048 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Nov 12 20:53:14.078063 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Nov 12 20:53:14.078078 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Nov 12 20:53:14.078093 kernel: Freeing SMP alternatives memory: 32K Nov 12 20:53:14.078107 kernel: pid_max: default: 32768 minimum: 301 Nov 12 20:53:14.078122 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Nov 12 20:53:14.078137 kernel: landlock: Up and running. Nov 12 20:53:14.078151 kernel: SELinux: Initializing. 
Nov 12 20:53:14.078166 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:53:14.078181 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Nov 12 20:53:14.078196 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Nov 12 20:53:14.078216 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:14.078231 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:14.078246 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Nov 12 20:53:14.078261 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Nov 12 20:53:14.078276 kernel: signal: max sigframe size: 3632 Nov 12 20:53:14.078291 kernel: rcu: Hierarchical SRCU implementation. Nov 12 20:53:14.078306 kernel: rcu: Max phase no-delay instances is 400. Nov 12 20:53:14.078320 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Nov 12 20:53:14.078335 kernel: smp: Bringing up secondary CPUs ... Nov 12 20:53:14.078353 kernel: smpboot: x86: Booting SMP configuration: Nov 12 20:53:14.078367 kernel: .... node #0, CPUs: #1 Nov 12 20:53:14.078382 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Nov 12 20:53:14.078398 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Nov 12 20:53:14.078413 kernel: smp: Brought up 1 node, 2 CPUs Nov 12 20:53:14.078427 kernel: smpboot: Max logical packages: 1 Nov 12 20:53:14.078442 kernel: smpboot: Total of 2 processors activated (10375.61 BogoMIPS) Nov 12 20:53:14.078457 kernel: devtmpfs: initialized Nov 12 20:53:14.078474 kernel: x86/mm: Memory block size: 128MB Nov 12 20:53:14.078489 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Nov 12 20:53:14.078505 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Nov 12 20:53:14.078520 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Nov 12 20:53:14.078535 kernel: pinctrl core: initialized pinctrl subsystem Nov 12 20:53:14.078550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Nov 12 20:53:14.078564 kernel: audit: initializing netlink subsys (disabled) Nov 12 20:53:14.078579 kernel: audit: type=2000 audit(1731444793.027:1): state=initialized audit_enabled=0 res=1 Nov 12 20:53:14.078594 kernel: thermal_sys: Registered thermal governor 'step_wise' Nov 12 20:53:14.078611 kernel: thermal_sys: Registered thermal governor 'user_space' Nov 12 20:53:14.078626 kernel: cpuidle: using governor menu Nov 12 20:53:14.078651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Nov 12 20:53:14.078666 kernel: dca service started, version 1.12.1 Nov 12 20:53:14.078681 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Nov 12 20:53:14.078695 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Nov 12 20:53:14.078710 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Nov 12 20:53:14.078725 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Nov 12 20:53:14.078739 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Nov 12 20:53:14.078757 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Nov 12 20:53:14.078772 kernel: ACPI: Added _OSI(Module Device)
Nov 12 20:53:14.078786 kernel: ACPI: Added _OSI(Processor Device)
Nov 12 20:53:14.078801 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Nov 12 20:53:14.078816 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Nov 12 20:53:14.078831 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Nov 12 20:53:14.078845 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Nov 12 20:53:14.078860 kernel: ACPI: Interpreter enabled
Nov 12 20:53:14.078875 kernel: ACPI: PM: (supports S0 S5)
Nov 12 20:53:14.078892 kernel: ACPI: Using IOAPIC for interrupt routing
Nov 12 20:53:14.078907 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Nov 12 20:53:14.078922 kernel: PCI: Ignoring E820 reservations for host bridge windows
Nov 12 20:53:14.078937 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Nov 12 20:53:14.078951 kernel: iommu: Default domain type: Translated
Nov 12 20:53:14.078966 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Nov 12 20:53:14.078980 kernel: efivars: Registered efivars operations
Nov 12 20:53:14.078995 kernel: PCI: Using ACPI for IRQ routing
Nov 12 20:53:14.079010 kernel: PCI: System does not support PCI
Nov 12 20:53:14.079026 kernel: vgaarb: loaded
Nov 12 20:53:14.079039 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Nov 12 20:53:14.079053 kernel: VFS: Disk quotas dquot_6.6.0
Nov 12 20:53:14.079068 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Nov 12 20:53:14.079082 kernel: pnp: PnP ACPI init
Nov 12 20:53:14.079096 kernel: pnp: PnP ACPI: found 3 devices
Nov 12 20:53:14.079110 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Nov 12 20:53:14.079124 kernel: NET: Registered PF_INET protocol family
Nov 12 20:53:14.079140 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Nov 12 20:53:14.079158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Nov 12 20:53:14.079172 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Nov 12 20:53:14.079186 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Nov 12 20:53:14.079199 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear)
Nov 12 20:53:14.079213 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Nov 12 20:53:14.079227 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.079241 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Nov 12 20:53:14.079254 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Nov 12 20:53:14.079270 kernel: NET: Registered PF_XDP protocol family
Nov 12 20:53:14.079286 kernel: PCI: CLS 0 bytes, default 64
Nov 12 20:53:14.079301 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Nov 12 20:53:14.079315 kernel: software IO TLB: mapped [mem 0x000000003ae73000-0x000000003ee73000] (64MB)
Nov 12 20:53:14.079328 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Nov 12 20:53:14.079342 kernel: Initialise system trusted keyrings
Nov 12 20:53:14.079355 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Nov 12 20:53:14.079369 kernel: Key type asymmetric registered
Nov 12 20:53:14.079383 kernel: Asymmetric key parser 'x509' registered
Nov 12 20:53:14.079398 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Nov 12 20:53:14.079417 kernel: io scheduler mq-deadline registered
Nov 12 20:53:14.079429 kernel: io scheduler kyber registered
Nov 12 20:53:14.079444 kernel: io scheduler bfq registered
Nov 12 20:53:14.079458 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Nov 12 20:53:14.079473 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Nov 12 20:53:14.079488 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Nov 12 20:53:14.079503 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Nov 12 20:53:14.079518 kernel: i8042: PNP: No PS/2 controller found.
Nov 12 20:53:14.079756 kernel: rtc_cmos 00:02: registered as rtc0
Nov 12 20:53:14.079882 kernel: rtc_cmos 00:02: setting system clock to 2024-11-12T20:53:13 UTC (1731444793)
Nov 12 20:53:14.079988 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Nov 12 20:53:14.080006 kernel: intel_pstate: CPU model not supported
Nov 12 20:53:14.080021 kernel: efifb: probing for efifb
Nov 12 20:53:14.080035 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Nov 12 20:53:14.080049 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Nov 12 20:53:14.080064 kernel: efifb: scrolling: redraw
Nov 12 20:53:14.080081 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Nov 12 20:53:14.080095 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:53:14.080110 kernel: fb0: EFI VGA frame buffer device
Nov 12 20:53:14.080124 kernel: pstore: Using crash dump compression: deflate
Nov 12 20:53:14.080138 kernel: pstore: Registered efi_pstore as persistent store backend
Nov 12 20:53:14.080153 kernel: NET: Registered PF_INET6 protocol family
Nov 12 20:53:14.080167 kernel: Segment Routing with IPv6
Nov 12 20:53:14.080181 kernel: In-situ OAM (IOAM) with IPv6
Nov 12 20:53:14.080195 kernel: NET: Registered PF_PACKET protocol family
Nov 12 20:53:14.080209 kernel: Key type dns_resolver registered
Nov 12 20:53:14.080226 kernel: IPI shorthand broadcast: enabled
Nov 12 20:53:14.080240 kernel: sched_clock: Marking stable (845003800, 41748200)->(1079105600, -192353600)
Nov 12 20:53:14.080254 kernel: registered taskstats version 1
Nov 12 20:53:14.080268 kernel: Loading compiled-in X.509 certificates
Nov 12 20:53:14.080283 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.60-flatcar: 0473a73d840db5324524af106a53c13fc6fc218a'
Nov 12 20:53:14.080296 kernel: Key type .fscrypt registered
Nov 12 20:53:14.080310 kernel: Key type fscrypt-provisioning registered
Nov 12 20:53:14.080324 kernel: ima: No TPM chip found, activating TPM-bypass!
Nov 12 20:53:14.080340 kernel: ima: Allocated hash algorithm: sha1
Nov 12 20:53:14.080354 kernel: ima: No architecture policies found
Nov 12 20:53:14.080370 kernel: clk: Disabling unused clocks
Nov 12 20:53:14.080384 kernel: Freeing unused kernel image (initmem) memory: 42828K
Nov 12 20:53:14.080399 kernel: Write protecting the kernel read-only data: 36864k
Nov 12 20:53:14.080413 kernel: Freeing unused kernel image (rodata/data gap) memory: 1852K
Nov 12 20:53:14.080426 kernel: Run /init as init process
Nov 12 20:53:14.080439 kernel: with arguments:
Nov 12 20:53:14.080453 kernel: /init
Nov 12 20:53:14.080465 kernel: with environment:
Nov 12 20:53:14.080480 kernel: HOME=/
Nov 12 20:53:14.080492 kernel: TERM=linux
Nov 12 20:53:14.080504 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Nov 12 20:53:14.080519 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:14.080534 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:14.080549 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:14.080562 systemd[1]: Running in initrd.
Nov 12 20:53:14.080578 systemd[1]: No hostname configured, using default hostname.
Nov 12 20:53:14.080592 systemd[1]: Hostname set to .
Nov 12 20:53:14.080606 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:14.080620 systemd[1]: Queued start job for default target initrd.target.
Nov 12 20:53:14.080661 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:14.080677 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:14.080692 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Nov 12 20:53:14.080707 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:14.080731 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Nov 12 20:53:14.080746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Nov 12 20:53:14.080763 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Nov 12 20:53:14.080779 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Nov 12 20:53:14.080794 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:14.080809 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:14.080824 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:14.080842 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:14.080857 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:14.080872 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:14.080887 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:14.080902 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:14.080917 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Nov 12 20:53:14.080933 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Nov 12 20:53:14.080948 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:14.080962 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:14.080981 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:14.080997 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:14.081012 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Nov 12 20:53:14.081027 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:14.081043 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Nov 12 20:53:14.081059 systemd[1]: Starting systemd-fsck-usr.service...
Nov 12 20:53:14.081075 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:14.081090 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:14.087332 systemd-journald[176]: Collecting audit messages is disabled.
Nov 12 20:53:14.087369 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:14.087382 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:14.087392 systemd-journald[176]: Journal started
Nov 12 20:53:14.087421 systemd-journald[176]: Runtime Journal (/run/log/journal/a5d6608346314ff2a3eb74237cb1d4cc) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:14.092651 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:14.092781 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:14.093820 systemd[1]: Finished systemd-fsck-usr.service.
Nov 12 20:53:14.106426 systemd-modules-load[177]: Inserted module 'overlay'
Nov 12 20:53:14.111038 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:14.123812 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:14.130041 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:14.137942 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:14.150717 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Nov 12 20:53:14.153382 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:14.160821 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:14.168121 kernel: Bridge firewalling registered
Nov 12 20:53:14.170907 systemd-modules-load[177]: Inserted module 'br_netfilter'
Nov 12 20:53:14.172418 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:14.181469 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:14.189808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:14.203938 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:14.210117 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:14.220825 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Nov 12 20:53:14.227001 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:14.232336 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:14.251857 dracut-cmdline[212]: dracut-dracut-053
Nov 12 20:53:14.254857 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=c3abb3a2c1edae861df27d3f75f2daa0ffde49038bd42517f0a3aa15da59cfc7
Nov 12 20:53:14.291368 systemd-resolved[214]: Positive Trust Anchors:
Nov 12 20:53:14.291383 systemd-resolved[214]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:14.291437 systemd-resolved[214]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:14.314967 systemd-resolved[214]: Defaulting to hostname 'linux'.
Nov 12 20:53:14.318048 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:14.323093 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:14.339657 kernel: SCSI subsystem initialized
Nov 12 20:53:14.349653 kernel: Loading iSCSI transport class v2.0-870.
Nov 12 20:53:14.360661 kernel: iscsi: registered transport (tcp)
Nov 12 20:53:14.381239 kernel: iscsi: registered transport (qla4xxx)
Nov 12 20:53:14.381315 kernel: QLogic iSCSI HBA Driver
Nov 12 20:53:14.417422 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:14.424871 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Nov 12 20:53:14.454799 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Nov 12 20:53:14.454873 kernel: device-mapper: uevent: version 1.0.3
Nov 12 20:53:14.457963 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Nov 12 20:53:14.497662 kernel: raid6: avx512x4 gen() 18254 MB/s
Nov 12 20:53:14.516654 kernel: raid6: avx512x2 gen() 18307 MB/s
Nov 12 20:53:14.534655 kernel: raid6: avx512x1 gen() 18366 MB/s
Nov 12 20:53:14.552649 kernel: raid6: avx2x4 gen() 18329 MB/s
Nov 12 20:53:14.571653 kernel: raid6: avx2x2 gen() 18278 MB/s
Nov 12 20:53:14.591829 kernel: raid6: avx2x1 gen() 13980 MB/s
Nov 12 20:53:14.591880 kernel: raid6: using algorithm avx512x1 gen() 18366 MB/s
Nov 12 20:53:14.612332 kernel: raid6: .... xor() 25994 MB/s, rmw enabled
Nov 12 20:53:14.612369 kernel: raid6: using avx512x2 recovery algorithm
Nov 12 20:53:14.634663 kernel: xor: automatically using best checksumming function avx
Nov 12 20:53:14.782669 kernel: Btrfs loaded, zoned=no, fsverity=no
Nov 12 20:53:14.792320 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:14.800795 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:14.814126 systemd-udevd[397]: Using default interface naming scheme 'v255'.
Nov 12 20:53:14.818574 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:14.833787 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Nov 12 20:53:14.845345 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation
Nov 12 20:53:14.873224 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:14.887782 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:14.928539 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:14.941820 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Nov 12 20:53:14.965518 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:14.969609 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:14.977754 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:14.980870 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:14.999947 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Nov 12 20:53:15.018654 kernel: cryptd: max_cpu_qlen set to 1000
Nov 12 20:53:15.021898 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:15.045654 kernel: AVX2 version of gcm_enc/dec engaged.
Nov 12 20:53:15.048389 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:15.048697 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:15.059811 kernel: AES CTR mode by8 optimization enabled
Nov 12 20:53:15.059751 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:15.062707 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:15.062853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.074401 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.087758 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.094546 kernel: hv_vmbus: Vmbus version:5.2
Nov 12 20:53:15.099411 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:15.099828 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.115899 kernel: hv_vmbus: registering driver hyperv_keyboard
Nov 12 20:53:15.115823 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:15.130651 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Nov 12 20:53:15.130687 kernel: hid: raw HID events driver (C) Jiri Kosina
Nov 12 20:53:15.145676 kernel: pps_core: LinuxPPS API ver. 1 registered
Nov 12 20:53:15.145748 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Nov 12 20:53:15.148375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:15.155703 kernel: hv_vmbus: registering driver hv_netvsc
Nov 12 20:53:15.158653 kernel: hv_vmbus: registering driver hid_hyperv
Nov 12 20:53:15.164764 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Nov 12 20:53:15.164967 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Nov 12 20:53:15.174439 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Nov 12 20:53:15.196469 kernel: hv_vmbus: registering driver hv_storvsc
Nov 12 20:53:15.196528 kernel: PTP clock support registered
Nov 12 20:53:15.201285 kernel: scsi host1: storvsc_host_t
Nov 12 20:53:15.201365 kernel: scsi host0: storvsc_host_t
Nov 12 20:53:15.205618 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:15.214651 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Nov 12 20:53:15.214703 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Nov 12 20:53:15.225198 kernel: hv_utils: Registering HyperV Utility Driver
Nov 12 20:53:15.225264 kernel: hv_vmbus: registering driver hv_utils
Nov 12 20:53:15.227613 kernel: hv_utils: Heartbeat IC version 3.0
Nov 12 20:53:15.230301 kernel: hv_utils: Shutdown IC version 3.2
Nov 12 20:53:15.230330 kernel: hv_utils: TimeSync IC version 4.0
Nov 12 20:53:16.406167 systemd-resolved[214]: Clock change detected. Flushing caches.
Nov 12 20:53:16.420143 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Nov 12 20:53:16.422553 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Nov 12 20:53:16.422574 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Nov 12 20:53:16.433381 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Nov 12 20:53:16.447082 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Nov 12 20:53:16.447269 kernel: sd 0:0:0:0: [sda] Write Protect is off
Nov 12 20:53:16.447427 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Nov 12 20:53:16.447579 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Nov 12 20:53:16.447744 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:16.447767 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Nov 12 20:53:16.540205 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: VF slot 1 added
Nov 12 20:53:16.551270 kernel: hv_vmbus: registering driver hv_pci
Nov 12 20:53:16.551329 kernel: hv_pci 1b7a5e33-e495-4cef-98e9-a7187a3283cc: PCI VMBus probing: Using version 0x10004
Nov 12 20:53:16.593554 kernel: hv_pci 1b7a5e33-e495-4cef-98e9-a7187a3283cc: PCI host bridge to bus e495:00
Nov 12 20:53:16.593749 kernel: pci_bus e495:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Nov 12 20:53:16.593926 kernel: pci_bus e495:00: No busn resource found for root bus, will use [bus 00-ff]
Nov 12 20:53:16.594327 kernel: pci e495:00:02.0: [15b3:1016] type 00 class 0x020000
Nov 12 20:53:16.594536 kernel: pci e495:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:16.594703 kernel: pci e495:00:02.0: enabling Extended Tags
Nov 12 20:53:16.594887 kernel: pci e495:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at e495:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Nov 12 20:53:16.595079 kernel: pci_bus e495:00: busn_res: [bus 00-ff] end is updated to 00
Nov 12 20:53:16.595308 kernel: pci e495:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Nov 12 20:53:16.759052 kernel: mlx5_core e495:00:02.0: enabling device (0000 -> 0002)
Nov 12 20:53:17.008491 kernel: mlx5_core e495:00:02.0: firmware version: 14.30.1284
Nov 12 20:53:17.008702 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447)
Nov 12 20:53:17.008725 kernel: BTRFS: device fsid 9dfeafbb-8ab7-4be2-acae-f51db463fc77 devid 1 transid 37 /dev/sda3 scanned by (udev-worker) (444)
Nov 12 20:53:17.008744 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: VF registering: eth1
Nov 12 20:53:17.008894 kernel: mlx5_core e495:00:02.0 eth1: joined to eth0
Nov 12 20:53:17.009086 kernel: mlx5_core e495:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic)
Nov 12 20:53:16.873782 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM.
Nov 12 20:53:16.968980 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:53:17.017385 kernel: mlx5_core e495:00:02.0 enP58517s1: renamed from eth1
Nov 12 20:53:16.979843 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A.
Nov 12 20:53:16.983365 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A.
Nov 12 20:53:17.000230 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT.
Nov 12 20:53:17.028134 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Nov 12 20:53:17.044981 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:17.051980 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:18.060058 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Nov 12 20:53:18.060500 disk-uuid[602]: The operation has completed successfully.
Nov 12 20:53:18.129583 systemd[1]: disk-uuid.service: Deactivated successfully.
Nov 12 20:53:18.129709 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Nov 12 20:53:18.169135 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Nov 12 20:53:18.174941 sh[688]: Success
Nov 12 20:53:18.205494 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Nov 12 20:53:18.400592 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Nov 12 20:53:18.413177 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Nov 12 20:53:18.419452 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Nov 12 20:53:18.443976 kernel: BTRFS info (device dm-0): first mount of filesystem 9dfeafbb-8ab7-4be2-acae-f51db463fc77
Nov 12 20:53:18.444027 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:18.449146 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Nov 12 20:53:18.451741 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Nov 12 20:53:18.454197 kernel: BTRFS info (device dm-0): using free space tree
Nov 12 20:53:18.665446 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Nov 12 20:53:18.670632 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Nov 12 20:53:18.679121 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Nov 12 20:53:18.687940 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Nov 12 20:53:18.705138 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:18.705195 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:18.707644 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:18.728487 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:18.736714 systemd[1]: mnt-oem.mount: Deactivated successfully.
Nov 12 20:53:18.741197 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:18.745158 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Nov 12 20:53:18.757181 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Nov 12 20:53:18.777701 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:18.789154 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:18.810430 systemd-networkd[872]: lo: Link UP
Nov 12 20:53:18.810439 systemd-networkd[872]: lo: Gained carrier
Nov 12 20:53:18.813113 systemd-networkd[872]: Enumeration completed
Nov 12 20:53:18.813575 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:18.817262 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:18.817422 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:18.817425 systemd-networkd[872]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:18.884981 kernel: mlx5_core e495:00:02.0 enP58517s1: Link up
Nov 12 20:53:18.918983 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: Data path switched to VF: enP58517s1
Nov 12 20:53:18.919155 systemd-networkd[872]: enP58517s1: Link UP
Nov 12 20:53:18.919322 systemd-networkd[872]: eth0: Link UP
Nov 12 20:53:18.921336 systemd-networkd[872]: eth0: Gained carrier
Nov 12 20:53:18.921353 systemd-networkd[872]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:18.930334 systemd-networkd[872]: enP58517s1: Gained carrier
Nov 12 20:53:18.966002 systemd-networkd[872]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:53:19.551731 ignition[837]: Ignition 2.19.0
Nov 12 20:53:19.551746 ignition[837]: Stage: fetch-offline
Nov 12 20:53:19.551790 ignition[837]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.559017 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:19.551802 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.552020 ignition[837]: parsed url from cmdline: ""
Nov 12 20:53:19.552025 ignition[837]: no config URL provided
Nov 12 20:53:19.552033 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.582274 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Nov 12 20:53:19.552045 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.552053 ignition[837]: failed to fetch config: resource requires networking
Nov 12 20:53:19.552309 ignition[837]: Ignition finished successfully
Nov 12 20:53:19.601430 ignition[880]: Ignition 2.19.0
Nov 12 20:53:19.601445 ignition[880]: Stage: fetch
Nov 12 20:53:19.601665 ignition[880]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.601678 ignition[880]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.601783 ignition[880]: parsed url from cmdline: ""
Nov 12 20:53:19.601788 ignition[880]: no config URL provided
Nov 12 20:53:19.601793 ignition[880]: reading system config file "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.601802 ignition[880]: no config at "/usr/lib/ignition/user.ign"
Nov 12 20:53:19.601824 ignition[880]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Nov 12 20:53:19.689586 ignition[880]: GET result: OK
Nov 12 20:53:19.689691 ignition[880]: config has been read from IMDS userdata
Nov 12 20:53:19.689721 ignition[880]: parsing config with SHA512: 98f8c1081d68f66cb1ea19b988e5bb1078126596ba85e9483bec1120930dfe92742d2a6da82f7c9bcac72c8fb7e6616f5eaeb722738002313148a92176ec3523
Nov 12 20:53:19.697206 unknown[880]: fetched base config from "system"
Nov 12 20:53:19.699575 unknown[880]: fetched base config from "system"
Nov 12 20:53:19.699588 unknown[880]: fetched user config from "azure"
Nov 12 20:53:19.700119 ignition[880]: fetch: fetch complete
Nov 12 20:53:19.701762 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Nov 12 20:53:19.700125 ignition[880]: fetch: fetch passed
Nov 12 20:53:19.700174 ignition[880]: Ignition finished successfully
Nov 12 20:53:19.715006 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Nov 12 20:53:19.729498 ignition[887]: Ignition 2.19.0
Nov 12 20:53:19.729508 ignition[887]: Stage: kargs
Nov 12 20:53:19.729725 ignition[887]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.729736 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.731032 ignition[887]: kargs: kargs passed
Nov 12 20:53:19.731077 ignition[887]: Ignition finished successfully
Nov 12 20:53:19.739220 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:19.750156 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Nov 12 20:53:19.765281 ignition[893]: Ignition 2.19.0
Nov 12 20:53:19.765291 ignition[893]: Stage: disks
Nov 12 20:53:19.767221 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Nov 12 20:53:19.765520 ignition[893]: no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:19.771493 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:19.765529 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:19.766369 ignition[893]: disks: disks passed
Nov 12 20:53:19.766418 ignition[893]: Ignition finished successfully
Nov 12 20:53:19.787864 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Nov 12 20:53:19.790397 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:19.790433 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:19.790773 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:19.807378 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Nov 12 20:53:19.862213 systemd-fsck[901]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks
Nov 12 20:53:19.867648 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Nov 12 20:53:19.884072 systemd[1]: Mounting sysroot.mount - /sysroot...
Nov 12 20:53:19.971361 kernel: EXT4-fs (sda9): mounted filesystem cc5635ac-cac6-420e-b789-89e3a937cfb2 r/w with ordered data mode. Quota mode: none.
Nov 12 20:53:19.971972 systemd[1]: Mounted sysroot.mount - /sysroot.
Nov 12 20:53:19.974545 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:20.014113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:20.018849 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Nov 12 20:53:20.026110 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Nov 12 20:53:20.046339 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (912)
Nov 12 20:53:20.046364 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:20.046379 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:20.046394 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:20.031405 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Nov 12 20:53:20.054768 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:20.031440 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:20.036133 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Nov 12 20:53:20.056168 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:20.069163 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Nov 12 20:53:20.416193 systemd-networkd[872]: enP58517s1: Gained IPv6LL
Nov 12 20:53:20.575539 coreos-metadata[914]: Nov 12 20:53:20.575 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:53:20.582066 coreos-metadata[914]: Nov 12 20:53:20.578 INFO Fetch successful
Nov 12 20:53:20.582066 coreos-metadata[914]: Nov 12 20:53:20.578 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:53:20.589382 coreos-metadata[914]: Nov 12 20:53:20.588 INFO Fetch successful
Nov 12 20:53:20.601432 coreos-metadata[914]: Nov 12 20:53:20.601 INFO wrote hostname ci-4081.2.0-a-c73ec1ae7a to /sysroot/etc/hostname
Nov 12 20:53:20.603095 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:53:20.655221 initrd-setup-root[942]: cut: /sysroot/etc/passwd: No such file or directory
Nov 12 20:53:20.688871 initrd-setup-root[949]: cut: /sysroot/etc/group: No such file or directory
Nov 12 20:53:20.708352 initrd-setup-root[956]: cut: /sysroot/etc/shadow: No such file or directory
Nov 12 20:53:20.713231 initrd-setup-root[963]: cut: /sysroot/etc/gshadow: No such file or directory
Nov 12 20:53:20.864228 systemd-networkd[872]: eth0: Gained IPv6LL
Nov 12 20:53:21.597670 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:21.606163 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Nov 12 20:53:21.620444 kernel: BTRFS info (device sda6): last unmount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:21.617469 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:21.626832 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Nov 12 20:53:21.650776 ignition[1030]: INFO : Ignition 2.19.0
Nov 12 20:53:21.650776 ignition[1030]: INFO : Stage: mount
Nov 12 20:53:21.651261 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:21.660113 ignition[1030]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:21.660113 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:21.660113 ignition[1030]: INFO : mount: mount passed
Nov 12 20:53:21.660113 ignition[1030]: INFO : Ignition finished successfully
Nov 12 20:53:21.660772 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Nov 12 20:53:21.679041 systemd[1]: Starting ignition-files.service - Ignition (files)...
Nov 12 20:53:21.687352 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Nov 12 20:53:21.705509 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1042)
Nov 12 20:53:21.705559 kernel: BTRFS info (device sda6): first mount of filesystem bdc43ff2-e8de-475f-88ba-e8c26a6bbaa6
Nov 12 20:53:21.706974 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Nov 12 20:53:21.710795 kernel: BTRFS info (device sda6): using free space tree
Nov 12 20:53:21.716126 kernel: BTRFS info (device sda6): auto enabling async discard
Nov 12 20:53:21.717537 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Nov 12 20:53:21.749895 ignition[1058]: INFO : Ignition 2.19.0
Nov 12 20:53:21.749895 ignition[1058]: INFO : Stage: files
Nov 12 20:53:21.754545 ignition[1058]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:21.754545 ignition[1058]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:21.754545 ignition[1058]: DEBUG : files: compiled without relabeling support, skipping
Nov 12 20:53:21.789305 ignition[1058]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Nov 12 20:53:21.789305 ignition[1058]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Nov 12 20:53:21.850023 ignition[1058]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Nov 12 20:53:21.853846 ignition[1058]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Nov 12 20:53:21.853846 ignition[1058]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Nov 12 20:53:21.850535 unknown[1058]: wrote ssh authorized keys file for user: core
Nov 12 20:53:21.871931 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:21.878629 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Nov 12 20:53:21.927237 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Nov 12 20:53:22.063047 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Nov 12 20:53:22.069008 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:22.073388 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Nov 12 20:53:22.073388 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:22.081993 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:22.086680 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Nov 12 20:53:22.627693 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Nov 12 20:53:23.068514 ignition[1058]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Nov 12 20:53:23.068514 ignition[1058]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Nov 12 20:53:23.083787 ignition[1058]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Nov 12 20:53:23.088966 ignition[1058]: INFO : files: files passed
Nov 12 20:53:23.088966 ignition[1058]: INFO : Ignition finished successfully
Nov 12 20:53:23.100663 systemd[1]: Finished ignition-files.service - Ignition (files).
Nov 12 20:53:23.122439 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Nov 12 20:53:23.133691 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Nov 12 20:53:23.140396 systemd[1]: ignition-quench.service: Deactivated successfully.
Nov 12 20:53:23.147223 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Nov 12 20:53:23.160983 initrd-setup-root-after-ignition[1087]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.160983 initrd-setup-root-after-ignition[1087]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.168517 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Nov 12 20:53:23.173235 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:23.176775 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Nov 12 20:53:23.191099 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Nov 12 20:53:23.217923 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Nov 12 20:53:23.218061 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Nov 12 20:53:23.226781 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:23.231761 systemd[1]: Reached target initrd.target - Initrd Default Target.
Nov 12 20:53:23.236604 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Nov 12 20:53:23.247204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Nov 12 20:53:23.261163 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:23.271122 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Nov 12 20:53:23.283864 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:23.289506 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:23.295172 systemd[1]: Stopped target timers.target - Timer Units.
Nov 12 20:53:23.295344 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Nov 12 20:53:23.295470 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Nov 12 20:53:23.296598 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Nov 12 20:53:23.297080 systemd[1]: Stopped target basic.target - Basic System.
Nov 12 20:53:23.297448 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Nov 12 20:53:23.297854 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Nov 12 20:53:23.298291 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Nov 12 20:53:23.298995 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Nov 12 20:53:23.299378 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Nov 12 20:53:23.299796 systemd[1]: Stopped target sysinit.target - System Initialization.
Nov 12 20:53:23.300221 systemd[1]: Stopped target local-fs.target - Local File Systems.
Nov 12 20:53:23.389313 ignition[1111]: INFO : Ignition 2.19.0
Nov 12 20:53:23.389313 ignition[1111]: INFO : Stage: umount
Nov 12 20:53:23.389313 ignition[1111]: INFO : no configs at "/usr/lib/ignition/base.d"
Nov 12 20:53:23.389313 ignition[1111]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Nov 12 20:53:23.389313 ignition[1111]: INFO : umount: umount passed
Nov 12 20:53:23.389313 ignition[1111]: INFO : Ignition finished successfully
Nov 12 20:53:23.300657 systemd[1]: Stopped target swap.target - Swaps.
Nov 12 20:53:23.301477 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Nov 12 20:53:23.301616 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Nov 12 20:53:23.302345 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:23.302738 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:23.303143 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Nov 12 20:53:23.335446 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:23.338469 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Nov 12 20:53:23.338633 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Nov 12 20:53:23.339796 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Nov 12 20:53:23.339938 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Nov 12 20:53:23.340748 systemd[1]: ignition-files.service: Deactivated successfully.
Nov 12 20:53:23.340879 systemd[1]: Stopped ignition-files.service - Ignition (files).
Nov 12 20:53:23.341145 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Nov 12 20:53:23.341265 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Nov 12 20:53:23.359702 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Nov 12 20:53:23.368115 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Nov 12 20:53:23.370291 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Nov 12 20:53:23.373116 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:23.381317 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Nov 12 20:53:23.381500 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Nov 12 20:53:23.395161 systemd[1]: ignition-mount.service: Deactivated successfully.
Nov 12 20:53:23.395271 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Nov 12 20:53:23.400126 systemd[1]: ignition-disks.service: Deactivated successfully.
Nov 12 20:53:23.400395 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Nov 12 20:53:23.407139 systemd[1]: ignition-kargs.service: Deactivated successfully.
Nov 12 20:53:23.407192 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Nov 12 20:53:23.411306 systemd[1]: ignition-fetch.service: Deactivated successfully.
Nov 12 20:53:23.413404 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Nov 12 20:53:23.490048 systemd[1]: Stopped target network.target - Network.
Nov 12 20:53:23.492182 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Nov 12 20:53:23.492264 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Nov 12 20:53:23.502502 systemd[1]: Stopped target paths.target - Path Units.
Nov 12 20:53:23.502622 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Nov 12 20:53:23.504825 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:23.509788 systemd[1]: Stopped target slices.target - Slice Units.
Nov 12 20:53:23.512232 systemd[1]: Stopped target sockets.target - Socket Units.
Nov 12 20:53:23.518062 systemd[1]: iscsid.socket: Deactivated successfully.
Nov 12 20:53:23.522179 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Nov 12 20:53:23.524777 systemd[1]: iscsiuio.socket: Deactivated successfully.
Nov 12 20:53:23.524832 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Nov 12 20:53:23.529851 systemd[1]: ignition-setup.service: Deactivated successfully.
Nov 12 20:53:23.529922 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Nov 12 20:53:23.534895 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Nov 12 20:53:23.534946 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Nov 12 20:53:23.540129 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Nov 12 20:53:23.549369 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:23.552013 systemd-networkd[872]: eth0: DHCPv6 lease lost
Nov 12 20:53:23.554051 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Nov 12 20:53:23.554845 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Nov 12 20:53:23.554933 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Nov 12 20:53:23.564794 systemd[1]: systemd-networkd.service: Deactivated successfully.
Nov 12 20:53:23.564898 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Nov 12 20:53:23.570058 systemd[1]: systemd-resolved.service: Deactivated successfully.
Nov 12 20:53:23.570165 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:23.576903 systemd[1]: sysroot-boot.service: Deactivated successfully.
Nov 12 20:53:23.577052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Nov 12 20:53:23.592726 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Nov 12 20:53:23.592815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:23.597149 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Nov 12 20:53:23.597229 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Nov 12 20:53:23.620079 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Nov 12 20:53:23.624496 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Nov 12 20:53:23.624568 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Nov 12 20:53:23.632730 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Nov 12 20:53:23.632788 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:23.639836 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Nov 12 20:53:23.639894 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:23.651101 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Nov 12 20:53:23.651168 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:23.656861 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:23.673615 systemd[1]: systemd-udevd.service: Deactivated successfully.
Nov 12 20:53:23.673782 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:23.679254 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Nov 12 20:53:23.679300 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:23.681987 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Nov 12 20:53:23.682024 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:23.682333 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Nov 12 20:53:23.682373 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Nov 12 20:53:23.683166 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Nov 12 20:53:23.683201 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Nov 12 20:53:23.721082 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: Data path switched from VF: enP58517s1
Nov 12 20:53:23.686189 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Nov 12 20:53:23.686233 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Nov 12 20:53:23.714659 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Nov 12 20:53:23.723608 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Nov 12 20:53:23.723684 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:23.723794 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Nov 12 20:53:23.723829 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:23.726751 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Nov 12 20:53:23.726795 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:23.727497 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Nov 12 20:53:23.727534 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:23.728453 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Nov 12 20:53:23.728536 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Nov 12 20:53:23.780838 systemd[1]: network-cleanup.service: Deactivated successfully.
Nov 12 20:53:23.780980 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Nov 12 20:53:23.785624 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Nov 12 20:53:23.796125 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Nov 12 20:53:23.805716 systemd[1]: Switching root.
Nov 12 20:53:23.871671 systemd-journald[176]: Journal stopped
Nov 12 20:53:29.183769 systemd-journald[176]: Received SIGTERM from PID 1 (systemd).
Nov 12 20:53:29.183820 kernel: SELinux: policy capability network_peer_controls=1
Nov 12 20:53:29.183839 kernel: SELinux: policy capability open_perms=1
Nov 12 20:53:29.183854 kernel: SELinux: policy capability extended_socket_class=1
Nov 12 20:53:29.183868 kernel: SELinux: policy capability always_check_network=0
Nov 12 20:53:29.183882 kernel: SELinux: policy capability cgroup_seclabel=1
Nov 12 20:53:29.183898 kernel: SELinux: policy capability nnp_nosuid_transition=1
Nov 12 20:53:29.183917 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Nov 12 20:53:29.183933 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Nov 12 20:53:29.183947 kernel: audit: type=1403 audit(1731444805.883:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Nov 12 20:53:29.184211 systemd[1]: Successfully loaded SELinux policy in 172.446ms.
Nov 12 20:53:29.184228 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.107ms.
Nov 12 20:53:29.184241 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Nov 12 20:53:29.184252 systemd[1]: Detected virtualization microsoft.
Nov 12 20:53:29.184270 systemd[1]: Detected architecture x86-64.
Nov 12 20:53:29.184281 systemd[1]: Detected first boot.
Nov 12 20:53:29.184292 systemd[1]: Hostname set to .
Nov 12 20:53:29.184304 systemd[1]: Initializing machine ID from random generator.
Nov 12 20:53:29.184315 zram_generator::config[1154]: No configuration found.
Nov 12 20:53:29.184330 systemd[1]: Populated /etc with preset unit settings.
Nov 12 20:53:29.184340 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Nov 12 20:53:29.184352 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Nov 12 20:53:29.184361 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Nov 12 20:53:29.184375 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Nov 12 20:53:29.184385 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Nov 12 20:53:29.184397 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Nov 12 20:53:29.184412 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Nov 12 20:53:29.184422 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Nov 12 20:53:29.184434 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Nov 12 20:53:29.184444 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Nov 12 20:53:29.184457 systemd[1]: Created slice user.slice - User and Session Slice.
Nov 12 20:53:29.184468 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Nov 12 20:53:29.184481 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Nov 12 20:53:29.184491 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Nov 12 20:53:29.184505 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Nov 12 20:53:29.184516 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Nov 12 20:53:29.184528 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Nov 12 20:53:29.184539 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Nov 12 20:53:29.184550 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Nov 12 20:53:29.184561 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Nov 12 20:53:29.184575 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Nov 12 20:53:29.184588 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Nov 12 20:53:29.184601 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Nov 12 20:53:29.184613 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Nov 12 20:53:29.184623 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Nov 12 20:53:29.184637 systemd[1]: Reached target slices.target - Slice Units.
Nov 12 20:53:29.184650 systemd[1]: Reached target swap.target - Swaps.
Nov 12 20:53:29.184662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Nov 12 20:53:29.184673 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Nov 12 20:53:29.184688 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Nov 12 20:53:29.184699 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Nov 12 20:53:29.184710 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Nov 12 20:53:29.184724 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Nov 12 20:53:29.184735 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Nov 12 20:53:29.184749 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Nov 12 20:53:29.184762 systemd[1]: Mounting media.mount - External Media Directory...
Nov 12 20:53:29.184773 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:29.184786 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Nov 12 20:53:29.184796 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Nov 12 20:53:29.184808 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Nov 12 20:53:29.184820 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Nov 12 20:53:29.184832 systemd[1]: Reached target machines.target - Containers.
Nov 12 20:53:29.184847 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Nov 12 20:53:29.184858 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:29.184871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Nov 12 20:53:29.184881 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Nov 12 20:53:29.184894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:29.184905 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:29.184917 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:29.184928 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Nov 12 20:53:29.184940 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:29.184955 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Nov 12 20:53:29.184984 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Nov 12 20:53:29.184995 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Nov 12 20:53:29.185008 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Nov 12 20:53:29.185018 systemd[1]: Stopped systemd-fsck-usr.service.
Nov 12 20:53:29.185028 systemd[1]: Starting systemd-journald.service - Journal Service...
Nov 12 20:53:29.185038 kernel: fuse: init (API version 7.39)
Nov 12 20:53:29.185047 kernel: loop: module loaded
Nov 12 20:53:29.185062 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Nov 12 20:53:29.185074 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Nov 12 20:53:29.185086 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Nov 12 20:53:29.185098 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Nov 12 20:53:29.185109 kernel: ACPI: bus type drm_connector registered
Nov 12 20:53:29.185143 systemd-journald[1243]: Collecting audit messages is disabled.
Nov 12 20:53:29.185170 systemd[1]: verity-setup.service: Deactivated successfully.
Nov 12 20:53:29.185184 systemd[1]: Stopped verity-setup.service.
Nov 12 20:53:29.185199 systemd-journald[1243]: Journal started
Nov 12 20:53:29.185226 systemd-journald[1243]: Runtime Journal (/run/log/journal/8ba3422ba7354c68b19723180b4a8115) is 8.0M, max 158.8M, 150.8M free.
Nov 12 20:53:28.538848 systemd[1]: Queued start job for default target multi-user.target.
Nov 12 20:53:28.610827 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Nov 12 20:53:28.611230 systemd[1]: systemd-journald.service: Deactivated successfully.
Nov 12 20:53:29.191990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:29.198804 systemd[1]: Started systemd-journald.service - Journal Service.
Nov 12 20:53:29.199406 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Nov 12 20:53:29.206679 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Nov 12 20:53:29.209977 systemd[1]: Mounted media.mount - External Media Directory.
Nov 12 20:53:29.212534 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Nov 12 20:53:29.215664 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Nov 12 20:53:29.218759 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Nov 12 20:53:29.221640 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Nov 12 20:53:29.227858 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Nov 12 20:53:29.236166 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Nov 12 20:53:29.236493 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Nov 12 20:53:29.240121 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:29.240443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:29.243639 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:29.243944 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:29.247364 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:29.247644 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:29.251356 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Nov 12 20:53:29.251648 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Nov 12 20:53:29.254902 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:29.255266 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:29.258497 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Nov 12 20:53:29.261819 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Nov 12 20:53:29.265523 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Nov 12 20:53:29.278617 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Nov 12 20:53:29.285984 systemd[1]: Reached target network-pre.target - Preparation for Network.
Nov 12 20:53:29.296073 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Nov 12 20:53:29.301071 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Nov 12 20:53:29.304048 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Nov 12 20:53:29.304095 systemd[1]: Reached target local-fs.target - Local File Systems.
Nov 12 20:53:29.308325 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Nov 12 20:53:29.316114 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Nov 12 20:53:29.319956 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Nov 12 20:53:29.322575 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:29.366160 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Nov 12 20:53:29.370395 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Nov 12 20:53:29.373424 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:29.377081 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Nov 12 20:53:29.379715 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:29.382262 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Nov 12 20:53:29.387217 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Nov 12 20:53:29.397188 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Nov 12 20:53:29.404313 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Nov 12 20:53:29.410949 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Nov 12 20:53:29.414931 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Nov 12 20:53:29.422552 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Nov 12 20:53:29.425704 systemd-journald[1243]: Time spent on flushing to /var/log/journal/8ba3422ba7354c68b19723180b4a8115 is 36.194ms for 963 entries.
Nov 12 20:53:29.425704 systemd-journald[1243]: System Journal (/var/log/journal/8ba3422ba7354c68b19723180b4a8115) is 8.0M, max 2.6G, 2.6G free.
Nov 12 20:53:29.488791 systemd-journald[1243]: Received client request to flush runtime journal.
Nov 12 20:53:29.488831 kernel: loop0: detected capacity change from 0 to 31056
Nov 12 20:53:29.436617 udevadm[1293]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Nov 12 20:53:29.446831 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Nov 12 20:53:29.450781 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Nov 12 20:53:29.463878 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Nov 12 20:53:29.489981 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Nov 12 20:53:29.515602 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Nov 12 20:53:29.515628 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
Nov 12 20:53:29.522632 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Nov 12 20:53:29.532184 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Nov 12 20:53:29.536830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Nov 12 20:53:29.537887 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Nov 12 20:53:29.574764 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Nov 12 20:53:29.753523 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Nov 12 20:53:29.768115 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Nov 12 20:53:29.785807 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Nov 12 20:53:29.785835 systemd-tmpfiles[1310]: ACLs are not supported, ignoring.
Nov 12 20:53:29.791121 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Nov 12 20:53:29.870182 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Nov 12 20:53:29.906482 kernel: loop1: detected capacity change from 0 to 210664
Nov 12 20:53:29.975992 kernel: loop2: detected capacity change from 0 to 142488
Nov 12 20:53:30.458993 kernel: loop3: detected capacity change from 0 to 140768
Nov 12 20:53:30.587499 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Nov 12 20:53:30.601012 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Nov 12 20:53:30.630511 systemd-udevd[1318]: Using default interface naming scheme 'v255'.
Nov 12 20:53:30.905991 kernel: loop4: detected capacity change from 0 to 31056
Nov 12 20:53:30.914985 kernel: loop5: detected capacity change from 0 to 210664
Nov 12 20:53:30.924985 kernel: loop6: detected capacity change from 0 to 142488
Nov 12 20:53:30.936978 kernel: loop7: detected capacity change from 0 to 140768
Nov 12 20:53:30.946124 (sd-merge)[1320]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'.
Nov 12 20:53:30.946700 (sd-merge)[1320]: Merged extensions into '/usr'.
Nov 12 20:53:30.948753 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Nov 12 20:53:30.981368 systemd[1]: Reloading requested from client PID 1291 ('systemd-sysext') (unit systemd-sysext.service)...
Nov 12 20:53:30.981390 systemd[1]: Reloading...
Nov 12 20:53:31.108627 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1323)
Nov 12 20:53:31.114981 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1323)
Nov 12 20:53:31.127071 zram_generator::config[1374]: No configuration found.
Nov 12 20:53:31.168053 kernel: hv_vmbus: registering driver hv_balloon
Nov 12 20:53:31.168152 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Nov 12 20:53:31.191990 kernel: hv_vmbus: registering driver hyperv_fb
Nov 12 20:53:31.214255 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Nov 12 20:53:31.222982 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Nov 12 20:53:31.232871 kernel: Console: switching to colour dummy device 80x25
Nov 12 20:53:31.237030 kernel: mousedev: PS/2 mouse device common for all mice
Nov 12 20:53:31.237112 kernel: Console: switching to colour frame buffer device 128x48
Nov 12 20:53:31.407986 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1339)
Nov 12 20:53:31.599554 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:31.672984 kernel: kvm_intel: Using Hyper-V Enlightened VMCS
Nov 12 20:53:31.736496 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Nov 12 20:53:31.737019 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM.
Nov 12 20:53:31.740195 systemd[1]: Reloading finished in 758 ms.
Nov 12 20:53:31.769951 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Nov 12 20:53:31.817222 systemd[1]: Starting ensure-sysext.service...
Nov 12 20:53:31.822194 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Nov 12 20:53:31.836114 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Nov 12 20:53:31.841133 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Nov 12 20:53:31.846178 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Nov 12 20:53:31.849522 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Nov 12 20:53:31.876293 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Nov 12 20:53:31.881502 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Nov 12 20:53:31.886893 systemd[1]: Reloading requested from client PID 1478 ('systemctl') (unit ensure-sysext.service)...
Nov 12 20:53:31.886928 systemd[1]: Reloading...
Nov 12 20:53:31.891182 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Nov 12 20:53:31.891862 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Nov 12 20:53:31.893120 systemd-tmpfiles[1481]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Nov 12 20:53:31.893530 systemd-tmpfiles[1481]: ACLs are not supported, ignoring.
Nov 12 20:53:31.893609 systemd-tmpfiles[1481]: ACLs are not supported, ignoring.
Nov 12 20:53:31.898723 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:31.898735 systemd-tmpfiles[1481]: Skipping /boot
Nov 12 20:53:31.920346 systemd-tmpfiles[1481]: Detected autofs mount point /boot during canonicalization of boot.
Nov 12 20:53:31.920471 systemd-tmpfiles[1481]: Skipping /boot
Nov 12 20:53:31.973993 zram_generator::config[1513]: No configuration found.
Nov 12 20:53:31.990269 lvm[1487]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:32.125496 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:53:32.202798 systemd[1]: Reloading finished in 315 ms.
Nov 12 20:53:32.225496 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Nov 12 20:53:32.229793 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Nov 12 20:53:32.240489 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Nov 12 20:53:32.243545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.249277 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Nov 12 20:53:32.271263 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Nov 12 20:53:32.274345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:32.277196 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Nov 12 20:53:32.287226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:32.293857 lvm[1581]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Nov 12 20:53:32.295066 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:32.304236 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:32.307153 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:32.309149 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Nov 12 20:53:32.320283 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Nov 12 20:53:32.328298 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Nov 12 20:53:32.339268 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Nov 12 20:53:32.341902 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.345019 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Nov 12 20:53:32.356311 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:32.356515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:32.359993 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:32.360181 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:32.364134 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:32.364886 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:32.373155 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:32.373368 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:32.380263 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.380606 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:32.391157 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:32.404468 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:32.414352 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:32.420365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:32.420613 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.422744 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:32.423670 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:32.430706 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:32.430891 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:32.439358 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Nov 12 20:53:32.442750 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Nov 12 20:53:32.446274 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:32.446394 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:32.453312 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.453750 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Nov 12 20:53:32.459765 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Nov 12 20:53:32.470245 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Nov 12 20:53:32.483336 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Nov 12 20:53:32.491102 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Nov 12 20:53:32.494711 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Nov 12 20:53:32.495228 systemd[1]: Reached target time-set.target - System Time Set.
Nov 12 20:53:32.498371 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Nov 12 20:53:32.502012 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Nov 12 20:53:32.505859 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Nov 12 20:53:32.506427 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Nov 12 20:53:32.511136 augenrules[1622]: No rules
Nov 12 20:53:32.514003 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Nov 12 20:53:32.528170 systemd[1]: Finished ensure-sysext.service.
Nov 12 20:53:32.537117 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Nov 12 20:53:32.543912 systemd[1]: modprobe@drm.service: Deactivated successfully.
Nov 12 20:53:32.544329 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Nov 12 20:53:32.547475 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Nov 12 20:53:32.547658 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Nov 12 20:53:32.554119 systemd[1]: modprobe@loop.service: Deactivated successfully.
Nov 12 20:53:32.554317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Nov 12 20:53:32.562468 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Nov 12 20:53:32.562564 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Nov 12 20:53:32.588833 systemd-networkd[1480]: lo: Link UP
Nov 12 20:53:32.588843 systemd-networkd[1480]: lo: Gained carrier
Nov 12 20:53:32.591479 systemd-networkd[1480]: Enumeration completed
Nov 12 20:53:32.591713 systemd[1]: Started systemd-networkd.service - Network Configuration.
Nov 12 20:53:32.591902 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:32.591906 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:32.601445 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Nov 12 20:53:32.628393 systemd-resolved[1588]: Positive Trust Anchors:
Nov 12 20:53:32.628411 systemd-resolved[1588]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Nov 12 20:53:32.628445 systemd-resolved[1588]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Nov 12 20:53:32.645308 systemd-resolved[1588]: Using system hostname 'ci-4081.2.0-a-c73ec1ae7a'.
Nov 12 20:53:32.648979 kernel: mlx5_core e495:00:02.0 enP58517s1: Link up
Nov 12 20:53:32.671002 kernel: hv_netvsc 000d3ab2-9bb2-000d-3ab2-9bb2000d3ab2 eth0: Data path switched to VF: enP58517s1
Nov 12 20:53:32.672178 systemd-networkd[1480]: enP58517s1: Link UP
Nov 12 20:53:32.672384 systemd-networkd[1480]: eth0: Link UP
Nov 12 20:53:32.672391 systemd-networkd[1480]: eth0: Gained carrier
Nov 12 20:53:32.672418 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:32.675537 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Nov 12 20:53:32.678718 systemd-networkd[1480]: enP58517s1: Gained carrier
Nov 12 20:53:32.678732 systemd[1]: Reached target network.target - Network.
Nov 12 20:53:32.681228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Nov 12 20:53:32.706037 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:53:32.830881 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Nov 12 20:53:32.834736 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Nov 12 20:53:33.728246 systemd-networkd[1480]: enP58517s1: Gained IPv6LL
Nov 12 20:53:34.624125 systemd-networkd[1480]: eth0: Gained IPv6LL
Nov 12 20:53:34.627429 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Nov 12 20:53:34.631174 systemd[1]: Reached target network-online.target - Network is Online.
Nov 12 20:53:34.865721 ldconfig[1286]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Nov 12 20:53:34.875125 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Nov 12 20:53:34.889163 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Nov 12 20:53:34.899317 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Nov 12 20:53:34.902590 systemd[1]: Reached target sysinit.target - System Initialization.
Nov 12 20:53:34.905383 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Nov 12 20:53:34.908660 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Nov 12 20:53:34.911905 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Nov 12 20:53:34.917207 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Nov 12 20:53:34.920237 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Nov 12 20:53:34.923383 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Nov 12 20:53:34.923424 systemd[1]: Reached target paths.target - Path Units.
Nov 12 20:53:34.925652 systemd[1]: Reached target timers.target - Timer Units.
Nov 12 20:53:34.928420 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Nov 12 20:53:34.932354 systemd[1]: Starting docker.socket - Docker Socket for the API...
Nov 12 20:53:34.942841 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Nov 12 20:53:34.946037 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Nov 12 20:53:34.948745 systemd[1]: Reached target sockets.target - Socket Units.
Nov 12 20:53:34.951006 systemd[1]: Reached target basic.target - Basic System.
Nov 12 20:53:34.953302 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:34.953336 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Nov 12 20:53:34.991150 systemd[1]: Starting chronyd.service - NTP client/server...
Nov 12 20:53:34.998131 systemd[1]: Starting containerd.service - containerd container runtime...
Nov 12 20:53:35.006154 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Nov 12 20:53:35.017232 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Nov 12 20:53:35.033467 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Nov 12 20:53:35.040131 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Nov 12 20:53:35.042838 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Nov 12 20:53:35.042887 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy).
Nov 12 20:53:35.045463 jq[1651]: false
Nov 12 20:53:35.046365 (chronyd)[1647]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS
Nov 12 20:53:35.046779 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon.
Nov 12 20:53:35.049882 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss).
Nov 12 20:53:35.056140 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:53:35.061166 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Nov 12 20:53:35.066486 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Nov 12 20:53:35.080385 KVP[1655]: KVP starting; pid is:1655
Nov 12 20:53:35.081040 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Nov 12 20:53:35.082764 chronyd[1663]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG)
Nov 12 20:53:35.085098 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Nov 12 20:53:35.097597 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Nov 12 20:53:35.105218 extend-filesystems[1654]: Found loop4
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found loop5
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found loop6
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found loop7
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda1
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda2
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda3
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found usr
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda4
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda6
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda7
Nov 12 20:53:35.107062 extend-filesystems[1654]: Found sda9
Nov 12 20:53:35.107062 extend-filesystems[1654]: Checking size of /dev/sda9
Nov 12 20:53:35.177567 kernel: hv_utils: KVP IC version 4.0
Nov 12 20:53:35.121995 KVP[1655]: KVP LIC Version: 3.1
Nov 12 20:53:35.117888 systemd[1]: Starting systemd-logind.service - User Login Management...
Nov 12 20:53:35.177847 extend-filesystems[1654]: Old size kept for /dev/sda9
Nov 12 20:53:35.177847 extend-filesystems[1654]: Found sr0
Nov 12 20:53:35.133657 chronyd[1663]: Timezone right/UTC failed leap second check, ignoring
Nov 12 20:53:35.124385 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Nov 12 20:53:35.133899 chronyd[1663]: Loaded seccomp filter (level 2)
Nov 12 20:53:35.125128 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Nov 12 20:53:35.133513 systemd[1]: Starting update-engine.service - Update Engine...
Nov 12 20:53:35.200672 update_engine[1673]: I20241112 20:53:35.190872 1673 main.cc:92] Flatcar Update Engine starting
Nov 12 20:53:35.151315 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Nov 12 20:53:35.180346 systemd[1]: Started chronyd.service - NTP client/server.
Nov 12 20:53:35.210045 jq[1678]: true
Nov 12 20:53:35.193078 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Nov 12 20:53:35.193348 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Nov 12 20:53:35.193711 systemd[1]: extend-filesystems.service: Deactivated successfully.
Nov 12 20:53:35.194847 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Nov 12 20:53:35.205750 systemd[1]: motdgen.service: Deactivated successfully.
Nov 12 20:53:35.207063 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Nov 12 20:53:35.211465 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Nov 12 20:53:35.213067 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Nov 12 20:53:35.220891 dbus-daemon[1650]: [system] SELinux support is enabled
Nov 12 20:53:35.223474 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Nov 12 20:53:35.230108 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Nov 12 20:53:35.233049 jq[1691]: true
Nov 12 20:53:35.245849 update_engine[1673]: I20241112 20:53:35.245785 1673 update_check_scheduler.cc:74] Next update check in 11m5s
Nov 12 20:53:35.280182 (ntainerd)[1692]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Nov 12 20:53:35.288955 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Nov 12 20:53:35.289034 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Nov 12 20:53:35.294297 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Nov 12 20:53:35.294323 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Nov 12 20:53:35.298536 systemd[1]: Started update-engine.service - Update Engine.
Nov 12 20:53:35.314429 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Nov 12 20:53:35.332022 systemd-logind[1670]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Nov 12 20:53:35.332274 systemd-logind[1670]: New seat seat0.
Nov 12 20:53:35.333173 systemd[1]: Started systemd-logind.service - User Login Management.
Nov 12 20:53:35.360367 coreos-metadata[1649]: Nov 12 20:53:35.359 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Nov 12 20:53:35.366272 tar[1689]: linux-amd64/helm
Nov 12 20:53:35.366580 coreos-metadata[1649]: Nov 12 20:53:35.366 INFO Fetch successful
Nov 12 20:53:35.366580 coreos-metadata[1649]: Nov 12 20:53:35.366 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Nov 12 20:53:35.374402 coreos-metadata[1649]: Nov 12 20:53:35.371 INFO Fetch successful
Nov 12 20:53:35.374402 coreos-metadata[1649]: Nov 12 20:53:35.374 INFO Fetching http://168.63.129.16/machine/6de7fd1e-4278-4f83-8b6d-e1b4b803792a/899ef86e%2D7be9%2D4700%2D9b85%2Dd3eb53469aad.%5Fci%2D4081.2.0%2Da%2Dc73ec1ae7a?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Nov 12 20:53:35.380168 coreos-metadata[1649]: Nov 12 20:53:35.380 INFO Fetch successful
Nov 12 20:53:35.382601 coreos-metadata[1649]: Nov 12 20:53:35.382 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Nov 12 20:53:35.408786 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (1734)
Nov 12 20:53:35.408862 bash[1726]: Updated "/home/core/.ssh/authorized_keys"
Nov 12 20:53:35.409030 coreos-metadata[1649]: Nov 12 20:53:35.405 INFO Fetch successful
Nov 12 20:53:35.409186 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Nov 12 20:53:35.425146 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Nov 12 20:53:35.489830 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Nov 12 20:53:35.529863 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Nov 12 20:53:35.638098 sshd_keygen[1687]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Nov 12 20:53:35.667583 locksmithd[1719]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Nov 12 20:53:35.679510 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Nov 12 20:53:35.695347 systemd[1]: Starting issuegen.service - Generate /run/issue...
Nov 12 20:53:35.699029 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent...
Nov 12 20:53:35.741380 systemd[1]: issuegen.service: Deactivated successfully.
Nov 12 20:53:35.741603 systemd[1]: Finished issuegen.service - Generate /run/issue.
Nov 12 20:53:35.753583 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Nov 12 20:53:35.784332 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Nov 12 20:53:35.797124 systemd[1]: Started getty@tty1.service - Getty on tty1.
Nov 12 20:53:35.807289 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Nov 12 20:53:35.810873 systemd[1]: Reached target getty.target - Login Prompts.
Nov 12 20:53:35.822182 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent.
Nov 12 20:53:36.180084 tar[1689]: linux-amd64/LICENSE
Nov 12 20:53:36.180084 tar[1689]: linux-amd64/README.md
Nov 12 20:53:36.191769 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Nov 12 20:53:36.375980 containerd[1692]: time="2024-11-12T20:53:36.375197400Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
Nov 12 20:53:36.411490 containerd[1692]: time="2024-11-12T20:53:36.411425600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413365800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.60-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413406600Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413428000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413598600Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413619300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413697200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413713900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413922400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413945300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413983800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:36.414654 containerd[1692]: time="2024-11-12T20:53:36.413999200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.414170100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.414462500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.414711000Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.414736600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.414999600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Nov 12 20:53:36.415910 containerd[1692]: time="2024-11-12T20:53:36.415186200Z" level=info msg="metadata content store policy set" policy=shared
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427497700Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427559000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427581300Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427610300Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427630600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.427784200Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428094700Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428211400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428232800Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428251600Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428271400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428290200Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428307700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.428893 containerd[1692]: time="2024-11-12T20:53:36.428326100Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428345700Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428364800Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428390700Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428410600Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428437500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428455900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428472500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428489900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428507100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428525300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428542000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428559300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428577400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429420 containerd[1692]: time="2024-11-12T20:53:36.428598100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428615100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428640200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428662400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428687400Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428725600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428745900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428763100Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428820200Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428843800Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428859400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428877400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428891400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428916000Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Nov 12 20:53:36.429911 containerd[1692]: time="2024-11-12T20:53:36.428932600Z" level=info msg="NRI interface is disabled by configuration."
Nov 12 20:53:36.430396 containerd[1692]: time="2024-11-12T20:53:36.428945700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Nov 12 20:53:36.430457 containerd[1692]: time="2024-11-12T20:53:36.429327000Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Nov 12 20:53:36.430457 containerd[1692]: time="2024-11-12T20:53:36.429415800Z" level=info msg="Connect containerd service"
Nov 12 20:53:36.430457 containerd[1692]: time="2024-11-12T20:53:36.429469800Z" level=info msg="using legacy CRI server"
Nov 12 20:53:36.430457 containerd[1692]: time="2024-11-12T20:53:36.429480100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Nov 12 20:53:36.430457 containerd[1692]: time="2024-11-12T20:53:36.429588900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Nov 12 20:53:36.430780 containerd[1692]: time="2024-11-12T20:53:36.430451900Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Nov 12 20:53:36.430780 containerd[1692]: time="2024-11-12T20:53:36.430674100Z" level=info msg="Start subscribing containerd event"
Nov 12 20:53:36.430780 containerd[1692]: time="2024-11-12T20:53:36.430728700Z" level=info msg="Start recovering state"
Nov 12 20:53:36.430893 containerd[1692]: time="2024-11-12T20:53:36.430801600Z" level=info msg="Start event monitor"
Nov 12 20:53:36.430893 containerd[1692]: time="2024-11-12T20:53:36.430819500Z" level=info msg="Start snapshots syncer"
Nov 12 20:53:36.430893 containerd[1692]: time="2024-11-12T20:53:36.430831200Z" level=info msg="Start cni network conf syncer for default"
Nov 12 20:53:36.430893 containerd[1692]: time="2024-11-12T20:53:36.430842000Z" level=info msg="Start streaming server"
Nov 12 20:53:36.433727 containerd[1692]: time="2024-11-12T20:53:36.431367500Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Nov 12 20:53:36.433727 containerd[1692]: time="2024-11-12T20:53:36.431438000Z" level=info msg=serving... address=/run/containerd/containerd.sock
Nov 12 20:53:36.432152 systemd[1]: Started containerd.service - containerd container runtime.
Nov 12 20:53:36.441985 containerd[1692]: time="2024-11-12T20:53:36.439716400Z" level=info msg="containerd successfully booted in 0.067087s"
Nov 12 20:53:36.745182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:53:36.748616 systemd[1]: Reached target multi-user.target - Multi-User System.
Nov 12 20:53:36.751509 systemd[1]: Startup finished in 834ms (firmware) + 25.824s (loader) + 982ms (kernel) + 10.839s (initrd) + 11.038s (userspace) = 49.520s.
Nov 12 20:53:36.757352 (kubelet)[1813]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Nov 12 20:53:37.039649 login[1794]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 12 20:53:37.040452 login[1793]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Nov 12 20:53:37.055062 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Nov 12 20:53:37.071269 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Nov 12 20:53:37.074001 systemd-logind[1670]: New session 2 of user core.
Nov 12 20:53:37.076692 systemd-logind[1670]: New session 1 of user core.
Nov 12 20:53:37.115907 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Nov 12 20:53:37.123320 systemd[1]: Starting user@500.service - User Manager for UID 500...
Nov 12 20:53:37.137998 (systemd)[1824]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Nov 12 20:53:37.391095 systemd[1824]: Queued start job for default target default.target.
Nov 12 20:53:37.397147 systemd[1824]: Created slice app.slice - User Application Slice.
Nov 12 20:53:37.397184 systemd[1824]: Reached target paths.target - Paths.
Nov 12 20:53:37.397203 systemd[1824]: Reached target timers.target - Timers.
Nov 12 20:53:37.398723 systemd[1824]: Starting dbus.socket - D-Bus User Message Bus Socket...
Nov 12 20:53:37.413362 systemd[1824]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Nov 12 20:53:37.414783 systemd[1824]: Reached target sockets.target - Sockets.
Nov 12 20:53:37.414809 systemd[1824]: Reached target basic.target - Basic System.
Nov 12 20:53:37.414859 systemd[1824]: Reached target default.target - Main User Target.
Nov 12 20:53:37.414893 systemd[1824]: Startup finished in 268ms.
Nov 12 20:53:37.415053 systemd[1]: Started user@500.service - User Manager for UID 500.
Nov 12 20:53:37.420872 systemd[1]: Started session-1.scope - Session 1 of User core.
Nov 12 20:53:37.421856 systemd[1]: Started session-2.scope - Session 2 of User core.
Nov 12 20:53:37.599701 kubelet[1813]: E1112 20:53:37.599639 1813 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Nov 12 20:53:37.602287 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Nov 12 20:53:37.602489 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Nov 12 20:53:37.766691 waagent[1795]: 2024-11-12T20:53:37.766518Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.767017Z INFO Daemon Daemon OS: flatcar 4081.2.0
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.768123Z INFO Daemon Daemon Python: 3.11.9
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.769183Z INFO Daemon Daemon Run daemon
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.769992Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4081.2.0'
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.770353Z INFO Daemon Daemon Using waagent for provisioning
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.771357Z INFO Daemon Daemon Activate resource disk
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.771917Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.776292Z INFO Daemon Daemon Found device: None
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.777234Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.778017Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.780283Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 12 20:53:37.799607 waagent[1795]: 2024-11-12T20:53:37.781169Z INFO Daemon Daemon Running default provisioning handler
Nov 12 20:53:37.802838 waagent[1795]: 2024-11-12T20:53:37.802768Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4.
Nov 12 20:53:37.808768 waagent[1795]: 2024-11-12T20:53:37.808717Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Nov 12 20:53:37.816625 waagent[1795]: 2024-11-12T20:53:37.808898Z INFO Daemon Daemon cloud-init is enabled: False
Nov 12 20:53:37.816625 waagent[1795]: 2024-11-12T20:53:37.809773Z INFO Daemon Daemon Copying ovf-env.xml
Nov 12 20:53:37.879693 waagent[1795]: 2024-11-12T20:53:37.877189Z INFO Daemon Daemon Successfully mounted dvd
Nov 12 20:53:37.891635 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Nov 12 20:53:37.893801 waagent[1795]: 2024-11-12T20:53:37.893170Z INFO Daemon Daemon Detect protocol endpoint
Nov 12 20:53:37.895765 waagent[1795]: 2024-11-12T20:53:37.895708Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Nov 12 20:53:37.907311 waagent[1795]: 2024-11-12T20:53:37.896042Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Nov 12 20:53:37.907311 waagent[1795]: 2024-11-12T20:53:37.896771Z INFO Daemon Daemon Test for route to 168.63.129.16
Nov 12 20:53:37.907311 waagent[1795]: 2024-11-12T20:53:37.897693Z INFO Daemon Daemon Route to 168.63.129.16 exists
Nov 12 20:53:37.907311 waagent[1795]: 2024-11-12T20:53:37.898399Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Nov 12 20:53:37.919897 waagent[1795]: 2024-11-12T20:53:37.919847Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Nov 12 20:53:37.927132 waagent[1795]: 2024-11-12T20:53:37.920274Z INFO Daemon Daemon Wire protocol version:2012-11-30
Nov 12 20:53:37.927132 waagent[1795]: 2024-11-12T20:53:37.921005Z INFO Daemon Daemon Server preferred version:2015-04-05
Nov 12 20:53:38.041138 waagent[1795]: 2024-11-12T20:53:38.040974Z INFO Daemon Daemon Initializing goal state during protocol detection
Nov 12 20:53:38.044709 waagent[1795]: 2024-11-12T20:53:38.044632Z INFO Daemon Daemon Forcing an update of the goal state.
Nov 12 20:53:38.050882 waagent[1795]: 2024-11-12T20:53:38.050823Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 12 20:53:38.067499 waagent[1795]: 2024-11-12T20:53:38.067430Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.159
Nov 12 20:53:38.082023 waagent[1795]: 2024-11-12T20:53:38.068206Z INFO Daemon
Nov 12 20:53:38.082023 waagent[1795]: 2024-11-12T20:53:38.069075Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: f429cc22-d53d-43bb-98c9-2cb27e9d2f9c eTag: 2567457109626726097 source: Fabric]
Nov 12 20:53:38.082023 waagent[1795]: 2024-11-12T20:53:38.070209Z INFO Daemon The vmSettings originated via Fabric; will ignore them.
Nov 12 20:53:38.082023 waagent[1795]: 2024-11-12T20:53:38.070777Z INFO Daemon
Nov 12 20:53:38.082023 waagent[1795]: 2024-11-12T20:53:38.071535Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1]
Nov 12 20:53:38.084816 waagent[1795]: 2024-11-12T20:53:38.084766Z INFO Daemon Daemon Downloading artifacts profile blob
Nov 12 20:53:38.163250 waagent[1795]: 2024-11-12T20:53:38.163158Z INFO Daemon Downloaded certificate {'thumbprint': '35328F014D16FD07A5CDF3330A05689B70A7295D', 'hasPrivateKey': False}
Nov 12 20:53:38.176282 waagent[1795]: 2024-11-12T20:53:38.163879Z INFO Daemon Downloaded certificate {'thumbprint': '0084BA97CA7AFA5187FC7EC0F0B6EBC0FC488211', 'hasPrivateKey': True}
Nov 12 20:53:38.176282 waagent[1795]: 2024-11-12T20:53:38.164800Z INFO Daemon Fetch goal state completed
Nov 12 20:53:38.176282 waagent[1795]: 2024-11-12T20:53:38.172550Z INFO Daemon Daemon Starting provisioning
Nov 12 20:53:38.176282 waagent[1795]: 2024-11-12T20:53:38.173384Z INFO Daemon Daemon Handle ovf-env.xml.
Nov 12 20:53:38.176282 waagent[1795]: 2024-11-12T20:53:38.174362Z INFO Daemon Daemon Set hostname [ci-4081.2.0-a-c73ec1ae7a]
Nov 12 20:53:38.193936 waagent[1795]: 2024-11-12T20:53:38.193857Z INFO Daemon Daemon Publish hostname [ci-4081.2.0-a-c73ec1ae7a]
Nov 12 20:53:38.201250 waagent[1795]: 2024-11-12T20:53:38.194417Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Nov 12 20:53:38.201250 waagent[1795]: 2024-11-12T20:53:38.194821Z INFO Daemon Daemon Primary interface is [eth0]
Nov 12 20:53:38.233545 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Nov 12 20:53:38.233557 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Nov 12 20:53:38.233592 systemd-networkd[1480]: eth0: DHCP lease lost
Nov 12 20:53:38.234938 waagent[1795]: 2024-11-12T20:53:38.234844Z INFO Daemon Daemon Create user account if not exists
Nov 12 20:53:38.250364 waagent[1795]: 2024-11-12T20:53:38.235270Z INFO Daemon Daemon User core already exists, skip useradd
Nov 12 20:53:38.250364 waagent[1795]: 2024-11-12T20:53:38.236757Z INFO Daemon Daemon Configure sudoer
Nov 12 20:53:38.250364 waagent[1795]: 2024-11-12T20:53:38.237903Z INFO Daemon Daemon Configure sshd
Nov 12 20:53:38.250364 waagent[1795]: 2024-11-12T20:53:38.239072Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive.
Nov 12 20:53:38.250364 waagent[1795]: 2024-11-12T20:53:38.239715Z INFO Daemon Daemon Deploy ssh public key.
Nov 12 20:53:38.251072 systemd-networkd[1480]: eth0: DHCPv6 lease lost
Nov 12 20:53:38.287100 systemd-networkd[1480]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16
Nov 12 20:53:39.336140 waagent[1795]: 2024-11-12T20:53:39.336040Z INFO Daemon Daemon Provisioning complete
Nov 12 20:53:39.347831 waagent[1795]: 2024-11-12T20:53:39.347749Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Nov 12 20:53:39.354376 waagent[1795]: 2024-11-12T20:53:39.348640Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Nov 12 20:53:39.354376 waagent[1795]: 2024-11-12T20:53:39.349444Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent
Nov 12 20:53:39.473166 waagent[1881]: 2024-11-12T20:53:39.473060Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Nov 12 20:53:39.473564 waagent[1881]: 2024-11-12T20:53:39.473231Z INFO ExtHandler ExtHandler OS: flatcar 4081.2.0
Nov 12 20:53:39.473564 waagent[1881]: 2024-11-12T20:53:39.473316Z INFO ExtHandler ExtHandler Python: 3.11.9
Nov 12 20:53:39.507584 waagent[1881]: 2024-11-12T20:53:39.507491Z INFO ExtHandler ExtHandler Distro: flatcar-4081.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Nov 12 20:53:39.507797 waagent[1881]: 2024-11-12T20:53:39.507749Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Nov 12 20:53:39.507887 waagent[1881]: 2024-11-12T20:53:39.507846Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Nov 12 20:53:39.515199 waagent[1881]: 2024-11-12T20:53:39.515131Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Nov 12 20:53:39.520661 waagent[1881]: 2024-11-12T20:53:39.520606Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.159
Nov 12 20:53:39.521152 waagent[1881]: 2024-11-12T20:53:39.521096Z INFO ExtHandler
Nov 12 20:53:39.521227 waagent[1881]: 2024-11-12T20:53:39.521192Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 112689c7-fc19-451e-b7ed-c2986520bc8a eTag: 2567457109626726097 source: Fabric]
Nov 12 20:53:39.521547 waagent[1881]: 2024-11-12T20:53:39.521496Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Nov 12 20:53:39.522118 waagent[1881]: 2024-11-12T20:53:39.522063Z INFO ExtHandler
Nov 12 20:53:39.522191 waagent[1881]: 2024-11-12T20:53:39.522148Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Nov 12 20:53:39.525434 waagent[1881]: 2024-11-12T20:53:39.525393Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Nov 12 20:53:39.620177 waagent[1881]: 2024-11-12T20:53:39.620034Z INFO ExtHandler Downloaded certificate {'thumbprint': '35328F014D16FD07A5CDF3330A05689B70A7295D', 'hasPrivateKey': False}
Nov 12 20:53:39.620581 waagent[1881]: 2024-11-12T20:53:39.620524Z INFO ExtHandler Downloaded certificate {'thumbprint': '0084BA97CA7AFA5187FC7EC0F0B6EBC0FC488211', 'hasPrivateKey': True}
Nov 12 20:53:39.621049 waagent[1881]: 2024-11-12T20:53:39.620993Z INFO ExtHandler Fetch goal state completed
Nov 12 20:53:39.635476 waagent[1881]: 2024-11-12T20:53:39.635411Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1881
Nov 12 20:53:39.635629 waagent[1881]: 2024-11-12T20:53:39.635578Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ********
Nov 12 20:53:39.637206 waagent[1881]: 2024-11-12T20:53:39.637153Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4081.2.0', '', 'Flatcar Container Linux by Kinvolk']
Nov 12 20:53:39.637585 waagent[1881]: 2024-11-12T20:53:39.637536Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Nov 12 20:53:39.675942 waagent[1881]: 2024-11-12T20:53:39.675883Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to
set up waagent-network-setup.service Nov 12 20:53:39.676247 waagent[1881]: 2024-11-12T20:53:39.676191Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Nov 12 20:53:39.682803 waagent[1881]: 2024-11-12T20:53:39.682757Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Nov 12 20:53:39.689615 systemd[1]: Reloading requested from client PID 1896 ('systemctl') (unit waagent.service)... Nov 12 20:53:39.689632 systemd[1]: Reloading... Nov 12 20:53:39.771989 zram_generator::config[1929]: No configuration found. Nov 12 20:53:39.899060 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:39.978660 systemd[1]: Reloading finished in 288 ms. Nov 12 20:53:40.007983 waagent[1881]: 2024-11-12T20:53:40.005519Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Nov 12 20:53:40.014210 systemd[1]: Reloading requested from client PID 1987 ('systemctl') (unit waagent.service)... Nov 12 20:53:40.014225 systemd[1]: Reloading... Nov 12 20:53:40.112027 zram_generator::config[2021]: No configuration found. Nov 12 20:53:40.228112 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:53:40.307624 systemd[1]: Reloading finished in 293 ms. 
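The docker.socket warning above is systemd transparently rewriting a legacy /var/run path at load time; the fix it asks for is a one-line change in the unit. A hedged fragment, with the path values taken from the log message:

```ini
[Socket]
# was: ListenStream=/var/run/docker.sock  (legacy directory below /var/run/)
ListenStream=/run/docker.sock
```

Until the unit file is updated, systemd keeps applying the rewrite and logging the same notice on every daemon-reload, which is why the message repeats across both reloads above.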
Nov 12 20:53:40.332985 waagent[1881]: 2024-11-12T20:53:40.332581Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Nov 12 20:53:40.332985 waagent[1881]: 2024-11-12T20:53:40.332806Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Nov 12 20:53:41.315458 waagent[1881]: 2024-11-12T20:53:41.315364Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Nov 12 20:53:41.316153 waagent[1881]: 2024-11-12T20:53:41.316087Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Nov 12 20:53:41.316948 waagent[1881]: 2024-11-12T20:53:41.316887Z INFO ExtHandler ExtHandler Starting env monitor service. Nov 12 20:53:41.317458 waagent[1881]: 2024-11-12T20:53:41.317403Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Nov 12 20:53:41.317588 waagent[1881]: 2024-11-12T20:53:41.317541Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:53:41.317659 waagent[1881]: 2024-11-12T20:53:41.317629Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Nov 12 20:53:41.317755 waagent[1881]: 2024-11-12T20:53:41.317718Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:53:41.318054 waagent[1881]: 2024-11-12T20:53:41.317998Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Nov 12 20:53:41.318730 waagent[1881]: 2024-11-12T20:53:41.318670Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Nov 12 20:53:41.318854 waagent[1881]: 2024-11-12T20:53:41.318797Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Nov 12 20:53:41.318924 waagent[1881]: 2024-11-12T20:53:41.318885Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Nov 12 20:53:41.319617 waagent[1881]: 2024-11-12T20:53:41.319530Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Nov 12 20:53:41.319943 waagent[1881]: 2024-11-12T20:53:41.319866Z INFO EnvHandler ExtHandler Configure routes Nov 12 20:53:41.320083 waagent[1881]: 2024-11-12T20:53:41.320039Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Nov 12 20:53:41.320083 waagent[1881]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Nov 12 20:53:41.320083 waagent[1881]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Nov 12 20:53:41.320083 waagent[1881]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Nov 12 20:53:41.320083 waagent[1881]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:53:41.320083 waagent[1881]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:53:41.320083 waagent[1881]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Nov 12 20:53:41.320332 waagent[1881]: 2024-11-12T20:53:41.320146Z INFO EnvHandler ExtHandler Gateway:None Nov 12 20:53:41.320332 waagent[1881]: 2024-11-12T20:53:41.320220Z INFO EnvHandler ExtHandler Routes:None Nov 12 20:53:41.320628 waagent[1881]: 2024-11-12T20:53:41.320494Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
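The /proc/net/route dump above encodes the Destination, Gateway, and Mask columns as little-endian hex. A small sketch of decoding those fields; the constants are copied from the table above:

```python
import socket
import struct

def decode_route_addr(hexaddr: str) -> str:
    """Decode a /proc/net/route address field (little-endian hex)."""
    return socket.inet_ntoa(struct.pack("<I", int(hexaddr, 16)))

# Values taken from the routing-table dump above:
print(decode_route_addr("0108C80A"))  # 10.200.8.1      (default gateway)
print(decode_route_addr("0008C80A"))  # 10.200.8.0      (local /24)
print(decode_route_addr("10813FA8"))  # 168.63.129.16   (Azure WireServer)
print(decode_route_addr("FEA9FEA9"))  # 169.254.169.254 (link-local/IMDS)
```

This is why the agent's "Examine /proc/net/route" step earlier in the log can identify eth0 as the primary interface: it is the interface carrying the 00000000 (default) destination.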
Nov 12 20:53:41.320897 waagent[1881]: 2024-11-12T20:53:41.320856Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Nov 12 20:53:41.328164 waagent[1881]: 2024-11-12T20:53:41.328108Z INFO ExtHandler ExtHandler Nov 12 20:53:41.328257 waagent[1881]: 2024-11-12T20:53:41.328215Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 2f0ade38-3f08-4df3-ae5c-02b52c6581b6 correlation 772f3a2f-49bb-4614-bad7-1de6eae972ee created: 2024-11-12T20:52:34.822578Z] Nov 12 20:53:41.328616 waagent[1881]: 2024-11-12T20:53:41.328569Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Nov 12 20:53:41.330981 waagent[1881]: 2024-11-12T20:53:41.329229Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Nov 12 20:53:41.362581 waagent[1881]: 2024-11-12T20:53:41.362484Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 2A822390-F8B3-41CC-8AB9-F76466885857;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Nov 12 20:53:41.376278 waagent[1881]: 2024-11-12T20:53:41.375801Z INFO MonitorHandler ExtHandler Network interfaces: Nov 12 20:53:41.376278 waagent[1881]: Executing ['ip', '-a', '-o', 'link']: Nov 12 20:53:41.376278 waagent[1881]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Nov 12 20:53:41.376278 waagent[1881]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b2:9b:b2 brd ff:ff:ff:ff:ff:ff Nov 12 20:53:41.376278 waagent[1881]: 3: enP58517s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b2:9b:b2 brd ff:ff:ff:ff:ff:ff\ altname enP58517p0s2 Nov 12 20:53:41.376278 waagent[1881]: Executing ['ip', '-4', '-a', '-o', 'address']: Nov 12 
20:53:41.376278 waagent[1881]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Nov 12 20:53:41.376278 waagent[1881]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Nov 12 20:53:41.376278 waagent[1881]: Executing ['ip', '-6', '-a', '-o', 'address']: Nov 12 20:53:41.376278 waagent[1881]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Nov 12 20:53:41.376278 waagent[1881]: 2: eth0 inet6 fe80::20d:3aff:feb2:9bb2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:53:41.376278 waagent[1881]: 3: enP58517s1 inet6 fe80::20d:3aff:feb2:9bb2/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Nov 12 20:53:41.468064 waagent[1881]: 2024-11-12T20:53:41.467939Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Nov 12 20:53:41.468064 waagent[1881]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.468064 waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.468064 waagent[1881]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.468064 waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.468064 waagent[1881]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.468064 waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.468064 waagent[1881]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:53:41.468064 waagent[1881]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:53:41.468064 waagent[1881]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 12 20:53:41.471249 waagent[1881]: 2024-11-12T20:53:41.471189Z INFO EnvHandler ExtHandler Current Firewall rules: Nov 12 20:53:41.471249 waagent[1881]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.471249 
waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.471249 waagent[1881]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.471249 waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.471249 waagent[1881]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Nov 12 20:53:41.471249 waagent[1881]: pkts bytes target prot opt in out source destination Nov 12 20:53:41.471249 waagent[1881]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Nov 12 20:53:41.471249 waagent[1881]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Nov 12 20:53:41.471249 waagent[1881]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Nov 12 20:53:41.471631 waagent[1881]: 2024-11-12T20:53:41.471484Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Nov 12 20:53:47.808380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Nov 12 20:53:47.821187 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:47.916045 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:47.920546 (kubelet)[2119]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:53:48.521167 kubelet[2119]: E1112 20:53:48.521109 2119 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:53:48.525163 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:53:48.525366 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:53:58.558279 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
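The OUTPUT chain printed twice above holds three rules scoped to 168.63.129.16 (the Azure WireServer): accept TCP to port 53, accept root-owned (UID 0) traffic, and drop any other INVALID/NEW connection attempt. A hedged reconstruction of those rules as iptables invocations; this is inferred from the rule dump, not waagent's actual code:

```python
# Reconstruction of the OUTPUT-chain rules shown in the dump above.
# 168.63.129.16 is the Azure WireServer address.
WIRESERVER = "168.63.129.16"

RULES = [
    # Allow DNS-port TCP traffic to the WireServer.
    ["-p", "tcp", "-d", WIRESERVER, "--dport", "53", "-j", "ACCEPT"],
    # Allow traffic owned by root (UID 0), i.e. the agent itself.
    ["-p", "tcp", "-d", WIRESERVER,
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Drop any other new or invalid connection attempt to the WireServer.
    ["-p", "tcp", "-d", WIRESERVER,
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in RULES:
    print("iptables -w -A OUTPUT " + " ".join(rule))
```

The net effect is that only root-owned processes can open new connections to the WireServer, which is what the earlier "Successfully added Azure fabric firewall rules" entry refers to.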
Nov 12 20:53:58.564202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:53:58.654854 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:53:58.659681 (kubelet)[2135]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:53:58.920662 chronyd[1663]: Selected source PHC0 Nov 12 20:53:59.206447 kubelet[2135]: E1112 20:53:59.206322 2135 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:53:59.209106 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:53:59.209311 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:09.308531 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Nov 12 20:54:09.322260 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:09.414897 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Nov 12 20:54:09.425284 (kubelet)[2151]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:09.999294 kubelet[2151]: E1112 20:54:09.999224 2151 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:10.001710 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:10.001923 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:12.175534 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Nov 12 20:54:12.181276 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:38034.service - OpenSSH per-connection server daemon (10.200.16.10:38034). Nov 12 20:54:12.851699 sshd[2160]: Accepted publickey for core from 10.200.16.10 port 38034 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:12.853299 sshd[2160]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:12.858505 systemd-logind[1670]: New session 3 of user core. Nov 12 20:54:12.865147 systemd[1]: Started session-3.scope - Session 3 of User core. Nov 12 20:54:13.400113 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:38036.service - OpenSSH per-connection server daemon (10.200.16.10:38036). Nov 12 20:54:14.025583 sshd[2165]: Accepted publickey for core from 10.200.16.10 port 38036 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:14.027156 sshd[2165]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:14.031612 systemd-logind[1670]: New session 4 of user core. Nov 12 20:54:14.034128 systemd[1]: Started session-4.scope - Session 4 of User core. 
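The kubelet restart cycles above (restart counters 1 through 3 so far) all fail on the same startup step: /var/lib/kubelet/config.yaml does not exist yet. That file is normally written by kubeadm during init/join, which has not run at this point, so the unit exits 1 and systemd reschedules it. A minimal sketch of the failing check, with the path taken from the log:

```python
from pathlib import Path

def load_kubelet_config(path: str = "/var/lib/kubelet/config.yaml") -> str:
    """Mirror kubelet's failing config load from the log above (a sketch,
    not kubelet's actual Go implementation)."""
    p = Path(path)
    if not p.is_file():
        # Same condition as the logged error:
        # "open /var/lib/kubelet/config.yaml: no such file or directory"
        raise FileNotFoundError(f"open {path}: no such file or directory")
    return p.read_text()
```

Until provisioning writes the file, the unit's Restart= policy keeps rescheduling it, which is why the restart counter continues to climb later in this log.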
Nov 12 20:54:14.470272 sshd[2165]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:14.474909 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:38036.service: Deactivated successfully. Nov 12 20:54:14.477215 systemd[1]: session-4.scope: Deactivated successfully. Nov 12 20:54:14.478132 systemd-logind[1670]: Session 4 logged out. Waiting for processes to exit. Nov 12 20:54:14.479325 systemd-logind[1670]: Removed session 4. Nov 12 20:54:14.581279 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:38052.service - OpenSSH per-connection server daemon (10.200.16.10:38052). Nov 12 20:54:15.208736 sshd[2172]: Accepted publickey for core from 10.200.16.10 port 38052 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:15.210580 sshd[2172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:15.216228 systemd-logind[1670]: New session 5 of user core. Nov 12 20:54:15.225137 systemd[1]: Started session-5.scope - Session 5 of User core. Nov 12 20:54:15.652521 sshd[2172]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:15.656048 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:38052.service: Deactivated successfully. Nov 12 20:54:15.658044 systemd[1]: session-5.scope: Deactivated successfully. Nov 12 20:54:15.659462 systemd-logind[1670]: Session 5 logged out. Waiting for processes to exit. Nov 12 20:54:15.660509 systemd-logind[1670]: Removed session 5. Nov 12 20:54:15.762923 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:38058.service - OpenSSH per-connection server daemon (10.200.16.10:38058). Nov 12 20:54:16.389009 sshd[2179]: Accepted publickey for core from 10.200.16.10 port 38058 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:16.390602 sshd[2179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:16.395325 systemd-logind[1670]: New session 6 of user core. 
Nov 12 20:54:16.405101 systemd[1]: Started session-6.scope - Session 6 of User core. Nov 12 20:54:16.836066 sshd[2179]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:16.840616 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:38058.service: Deactivated successfully. Nov 12 20:54:16.842689 systemd[1]: session-6.scope: Deactivated successfully. Nov 12 20:54:16.843361 systemd-logind[1670]: Session 6 logged out. Waiting for processes to exit. Nov 12 20:54:16.844454 systemd-logind[1670]: Removed session 6. Nov 12 20:54:16.947154 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:38062.service - OpenSSH per-connection server daemon (10.200.16.10:38062). Nov 12 20:54:17.575902 sshd[2186]: Accepted publickey for core from 10.200.16.10 port 38062 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:17.577699 sshd[2186]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:17.583340 systemd-logind[1670]: New session 7 of user core. Nov 12 20:54:17.593134 systemd[1]: Started session-7.scope - Session 7 of User core. Nov 12 20:54:18.118183 sudo[2189]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Nov 12 20:54:18.118557 sudo[2189]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:18.143417 sudo[2189]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:18.245367 sshd[2186]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:18.249356 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:38062.service: Deactivated successfully. Nov 12 20:54:18.251527 systemd[1]: session-7.scope: Deactivated successfully. Nov 12 20:54:18.253152 systemd-logind[1670]: Session 7 logged out. Waiting for processes to exit. Nov 12 20:54:18.254305 systemd-logind[1670]: Removed session 7. Nov 12 20:54:18.356590 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:59200.service - OpenSSH per-connection server daemon (10.200.16.10:59200). 
Nov 12 20:54:18.991714 sshd[2194]: Accepted publickey for core from 10.200.16.10 port 59200 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:18.993621 sshd[2194]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:18.998426 systemd-logind[1670]: New session 8 of user core. Nov 12 20:54:19.004126 systemd[1]: Started session-8.scope - Session 8 of User core. Nov 12 20:54:19.286316 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Nov 12 20:54:19.339059 sudo[2198]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Nov 12 20:54:19.339442 sudo[2198]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:19.342803 sudo[2198]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:19.349272 sudo[2197]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Nov 12 20:54:19.349621 sudo[2197]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:19.362317 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:19.365416 auditctl[2201]: No rules Nov 12 20:54:19.365774 systemd[1]: audit-rules.service: Deactivated successfully. Nov 12 20:54:19.365989 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:19.369001 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Nov 12 20:54:19.399488 augenrules[2219]: No rules Nov 12 20:54:19.401056 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Nov 12 20:54:19.402764 sudo[2197]: pam_unix(sudo:session): session closed for user root Nov 12 20:54:19.504590 sshd[2194]: pam_unix(sshd:session): session closed for user core Nov 12 20:54:19.508309 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:59200.service: Deactivated successfully. 
Nov 12 20:54:19.510552 systemd[1]: session-8.scope: Deactivated successfully. Nov 12 20:54:19.511939 systemd-logind[1670]: Session 8 logged out. Waiting for processes to exit. Nov 12 20:54:19.513097 systemd-logind[1670]: Removed session 8. Nov 12 20:54:19.614940 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:59212.service - OpenSSH per-connection server daemon (10.200.16.10:59212). Nov 12 20:54:20.058374 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Nov 12 20:54:20.065181 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:20.162238 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:20.167046 (kubelet)[2237]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:20.210479 kubelet[2237]: E1112 20:54:20.210427 2237 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:20.212840 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:20.213068 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:20.245336 sshd[2227]: Accepted publickey for core from 10.200.16.10 port 59212 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0 Nov 12 20:54:20.247157 sshd[2227]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Nov 12 20:54:20.252929 systemd-logind[1670]: New session 9 of user core. Nov 12 20:54:20.263148 systemd[1]: Started session-9.scope - Session 9 of User core. 
Nov 12 20:54:20.591259 sudo[2245]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Nov 12 20:54:20.591621 sudo[2245]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Nov 12 20:54:20.891999 update_engine[1673]: I20241112 20:54:20.890938 1673 update_attempter.cc:509] Updating boot flags... Nov 12 20:54:20.999001 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2262) Nov 12 20:54:21.120987 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 37 scanned by (udev-worker) (2265) Nov 12 20:54:22.241370 (dockerd)[2326]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Nov 12 20:54:22.241370 systemd[1]: Starting docker.service - Docker Application Container Engine... Nov 12 20:54:23.591410 dockerd[2326]: time="2024-11-12T20:54:23.591348092Z" level=info msg="Starting up" Nov 12 20:54:24.084873 dockerd[2326]: time="2024-11-12T20:54:24.084818975Z" level=info msg="Loading containers: start." Nov 12 20:54:24.252102 kernel: Initializing XFRM netlink socket Nov 12 20:54:24.412837 systemd-networkd[1480]: docker0: Link UP Nov 12 20:54:24.436152 dockerd[2326]: time="2024-11-12T20:54:24.436108722Z" level=info msg="Loading containers: done." Nov 12 20:54:24.491378 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1140796331-merged.mount: Deactivated successfully. 
Nov 12 20:54:24.505639 dockerd[2326]: time="2024-11-12T20:54:24.505587524Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Nov 12 20:54:24.505834 dockerd[2326]: time="2024-11-12T20:54:24.505718825Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Nov 12 20:54:24.505907 dockerd[2326]: time="2024-11-12T20:54:24.505864127Z" level=info msg="Daemon has completed initialization" Nov 12 20:54:24.562690 dockerd[2326]: time="2024-11-12T20:54:24.562391097Z" level=info msg="API listen on /run/docker.sock" Nov 12 20:54:24.562888 systemd[1]: Started docker.service - Docker Application Container Engine. Nov 12 20:54:26.697499 containerd[1692]: time="2024-11-12T20:54:26.697458556Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\"" Nov 12 20:54:27.345819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3907718998.mount: Deactivated successfully. 
Nov 12 20:54:29.017854 containerd[1692]: time="2024-11-12T20:54:29.017793784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:29.020908 containerd[1692]: time="2024-11-12T20:54:29.020845115Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.6: active requests=0, bytes read=32676451" Nov 12 20:54:29.025870 containerd[1692]: time="2024-11-12T20:54:29.025813665Z" level=info msg="ImageCreate event name:\"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:29.034496 containerd[1692]: time="2024-11-12T20:54:29.034459753Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:29.035736 containerd[1692]: time="2024-11-12T20:54:29.035452263Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.6\" with image id \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:3a820898379831ecff7cf4ce4954bb7a6505988eefcef146fd1ee2f56a01cdbb\", size \"32673243\" in 2.337953807s" Nov 12 20:54:29.035736 containerd[1692]: time="2024-11-12T20:54:29.035497263Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.6\" returns image reference \"sha256:a247bfa6152e770cd36ef6fe2a8831429eb43da1fd506c30b12af93f032ee849\"" Nov 12 20:54:29.055909 containerd[1692]: time="2024-11-12T20:54:29.055867169Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\"" Nov 12 20:54:30.308457 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. 
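The containerd "Pulled image" entries above log the image size in bytes and the wall-clock pull duration (e.g. size \"32673243\" in 2.337953807s for kube-apiserver). A small sketch, assuming that log shape, for extracting the two figures and estimating pull throughput:

```python
import re

# Matches the 'size \"N\" in Ds' suffix of containerd's Pulled-image lines.
PULL_RE = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)s')

# Sample taken from the kube-apiserver pull entry above (message shortened).
line = r'msg="Pulled image ... size \"32673243\" in 2.337953807s"'
m = PULL_RE.search(line)
if m:
    size_bytes, seconds = int(m.group(1)), float(m.group(2))
    print(f"{size_bytes} bytes in {seconds}s "
          f"(~{size_bytes / seconds / 2**20:.1f} MiB/s)")
```

The same pattern fits the kube-controller-manager, kube-scheduler, and kube-proxy pull entries later in the log, so it can be applied line by line to profile registry throughput during first boot.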
Nov 12 20:54:30.317373 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:30.461479 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:30.465048 (kubelet)[2534]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:30.992144 kubelet[2534]: E1112 20:54:30.992084 2534 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:30.994544 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:30.994753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:30.998424 containerd[1692]: time="2024-11-12T20:54:30.998381191Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.001216 containerd[1692]: time="2024-11-12T20:54:31.001164631Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.6: active requests=0, bytes read=29605804" Nov 12 20:54:31.005079 containerd[1692]: time="2024-11-12T20:54:31.005024386Z" level=info msg="ImageCreate event name:\"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.010827 containerd[1692]: time="2024-11-12T20:54:31.010771168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:31.011806 containerd[1692]: time="2024-11-12T20:54:31.011765282Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.6\" with image id \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a412c3cdf35d39c8d37748b457a486faae7c5f2ee1d1ba2059c709bc5534686\", size \"31051162\" in 1.955851513s" Nov 12 20:54:31.011887 containerd[1692]: time="2024-11-12T20:54:31.011811983Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.6\" returns image reference \"sha256:382949f9bfdd9da8bf555d18adac4eb0dba8264b7e3b5963e6a26ef8d412477c\"" Nov 12 20:54:31.034644 containerd[1692]: time="2024-11-12T20:54:31.034600009Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\"" Nov 12 20:54:32.280168 containerd[1692]: time="2024-11-12T20:54:32.280109427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:32.282683 containerd[1692]: time="2024-11-12T20:54:32.282613663Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.6: active requests=0, bytes read=17784252" Nov 12 20:54:32.288783 containerd[1692]: time="2024-11-12T20:54:32.288719650Z" level=info msg="ImageCreate event name:\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:32.294662 containerd[1692]: time="2024-11-12T20:54:32.294597434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:32.295766 containerd[1692]: time="2024-11-12T20:54:32.295589048Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.6\" with image id 
\"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:948395c284d82c985f2dc0d99b5b51b3ca85eba97003babbc73834e0ab91fa59\", size \"19229628\" in 1.260947738s" Nov 12 20:54:32.295766 containerd[1692]: time="2024-11-12T20:54:32.295630149Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.6\" returns image reference \"sha256:ad5858afd532223324ff223396490f5fd8228323963b424ad7868407bd4ef1fb\"" Nov 12 20:54:32.319116 containerd[1692]: time="2024-11-12T20:54:32.319053384Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\"" Nov 12 20:54:33.395696 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1910439123.mount: Deactivated successfully. Nov 12 20:54:33.853090 containerd[1692]: time="2024-11-12T20:54:33.853029028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:33.858046 containerd[1692]: time="2024-11-12T20:54:33.857985699Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.6: active requests=0, bytes read=29054632" Nov 12 20:54:33.860563 containerd[1692]: time="2024-11-12T20:54:33.860502835Z" level=info msg="ImageCreate event name:\"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:33.865304 containerd[1692]: time="2024-11-12T20:54:33.865231903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:33.866040 containerd[1692]: time="2024-11-12T20:54:33.865837812Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.6\" with image id \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\", repo tag 
\"registry.k8s.io/kube-proxy:v1.30.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:aaf790f611159ab21713affc2c5676f742c9b31db26dd2e61e46c4257dd11b76\", size \"29053643\" in 1.546738927s" Nov 12 20:54:33.866040 containerd[1692]: time="2024-11-12T20:54:33.865891312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.6\" returns image reference \"sha256:2cce8902ed3ccdc78ecdb02734bd9ba32e2c7b44fc221663cf9ece2a179ff6a6\"" Nov 12 20:54:33.889051 containerd[1692]: time="2024-11-12T20:54:33.889009143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Nov 12 20:54:34.495577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount311682254.mount: Deactivated successfully. Nov 12 20:54:35.677862 containerd[1692]: time="2024-11-12T20:54:35.677804133Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.680504 containerd[1692]: time="2024-11-12T20:54:35.680434270Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Nov 12 20:54:35.683885 containerd[1692]: time="2024-11-12T20:54:35.683829519Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.689148 containerd[1692]: time="2024-11-12T20:54:35.689094594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:35.690159 containerd[1692]: time="2024-11-12T20:54:35.690125409Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.801072365s" Nov 12 20:54:35.690616 containerd[1692]: time="2024-11-12T20:54:35.690272911Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Nov 12 20:54:35.712722 containerd[1692]: time="2024-11-12T20:54:35.712683832Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Nov 12 20:54:36.288662 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272800849.mount: Deactivated successfully. Nov 12 20:54:36.319804 containerd[1692]: time="2024-11-12T20:54:36.319747316Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:36.322380 containerd[1692]: time="2024-11-12T20:54:36.322308053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Nov 12 20:54:36.327904 containerd[1692]: time="2024-11-12T20:54:36.327844332Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:36.332428 containerd[1692]: time="2024-11-12T20:54:36.332367797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:36.333261 containerd[1692]: time="2024-11-12T20:54:36.333097407Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 620.375575ms" Nov 12 
20:54:36.333261 containerd[1692]: time="2024-11-12T20:54:36.333139308Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Nov 12 20:54:36.357530 containerd[1692]: time="2024-11-12T20:54:36.357397355Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Nov 12 20:54:36.910197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2350773870.mount: Deactivated successfully. Nov 12 20:54:39.864928 containerd[1692]: time="2024-11-12T20:54:39.864862162Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:39.867050 containerd[1692]: time="2024-11-12T20:54:39.866979594Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" Nov 12 20:54:39.870381 containerd[1692]: time="2024-11-12T20:54:39.870308945Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:39.876451 containerd[1692]: time="2024-11-12T20:54:39.876373638Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:54:39.877630 containerd[1692]: time="2024-11-12T20:54:39.877468954Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 3.520023899s" Nov 12 20:54:39.877630 containerd[1692]: time="2024-11-12T20:54:39.877514055Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image 
reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Nov 12 20:54:41.058376 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Nov 12 20:54:41.068091 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:41.236684 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:41.253299 (kubelet)[2733]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Nov 12 20:54:41.314422 kubelet[2733]: E1112 20:54:41.314264 2733 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Nov 12 20:54:41.317396 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Nov 12 20:54:41.317715 systemd[1]: kubelet.service: Failed with result 'exit-code'. Nov 12 20:54:42.522691 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:42.527254 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:42.551910 systemd[1]: Reloading requested from client PID 2748 ('systemctl') (unit session-9.scope)... Nov 12 20:54:42.551927 systemd[1]: Reloading... Nov 12 20:54:42.663988 zram_generator::config[2785]: No configuration found. Nov 12 20:54:42.788588 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Nov 12 20:54:42.877194 systemd[1]: Reloading finished in 324 ms. 
Nov 12 20:54:42.990158 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Nov 12 20:54:42.990319 systemd[1]: kubelet.service: Failed with result 'signal'. Nov 12 20:54:42.990715 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:42.995298 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Nov 12 20:54:44.011975 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Nov 12 20:54:44.027289 (kubelet)[2854]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Nov 12 20:54:44.172549 kubelet[2854]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Nov 12 20:54:44.172549 kubelet[2854]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Nov 12 20:54:44.172549 kubelet[2854]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Nov 12 20:54:44.173124 kubelet[2854]: I1112 20:54:44.172624 2854 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Nov 12 20:54:44.586419 kubelet[2854]: I1112 20:54:44.586376 2854 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Nov 12 20:54:44.586419 kubelet[2854]: I1112 20:54:44.586408 2854 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Nov 12 20:54:44.586697 kubelet[2854]: I1112 20:54:44.586674 2854 server.go:927] "Client rotation is on, will bootstrap in background" Nov 12 20:54:45.516689 kubelet[2854]: I1112 20:54:45.516219 2854 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Nov 12 20:54:45.518201 kubelet[2854]: E1112 20:54:45.517975 2854 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.530583 kubelet[2854]: I1112 20:54:45.530541 2854 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Nov 12 20:54:45.531912 kubelet[2854]: I1112 20:54:45.531848 2854 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Nov 12 20:54:45.532139 kubelet[2854]: I1112 20:54:45.531904 2854 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-a-c73ec1ae7a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Nov 12 20:54:45.532296 kubelet[2854]: I1112 20:54:45.532157 2854 topology_manager.go:138] "Creating topology manager with none policy" Nov 
12 20:54:45.532296 kubelet[2854]: I1112 20:54:45.532172 2854 container_manager_linux.go:301] "Creating device plugin manager" Nov 12 20:54:45.532374 kubelet[2854]: I1112 20:54:45.532335 2854 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:45.533264 kubelet[2854]: I1112 20:54:45.533243 2854 kubelet.go:400] "Attempting to sync node with API server" Nov 12 20:54:45.533264 kubelet[2854]: I1112 20:54:45.533266 2854 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Nov 12 20:54:45.533513 kubelet[2854]: I1112 20:54:45.533293 2854 kubelet.go:312] "Adding apiserver pod source" Nov 12 20:54:45.533513 kubelet[2854]: I1112 20:54:45.533311 2854 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Nov 12 20:54:45.539984 kubelet[2854]: W1112 20:54:45.539502 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.539984 kubelet[2854]: E1112 20:54:45.539574 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.539984 kubelet[2854]: W1112 20:54:45.539868 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-c73ec1ae7a&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.539984 kubelet[2854]: E1112 20:54:45.539912 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-c73ec1ae7a&limit=500&resourceVersion=0": 
dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.540924 kubelet[2854]: I1112 20:54:45.540705 2854 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Nov 12 20:54:45.542314 kubelet[2854]: I1112 20:54:45.542283 2854 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Nov 12 20:54:45.542411 kubelet[2854]: W1112 20:54:45.542356 2854 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Nov 12 20:54:45.543323 kubelet[2854]: I1112 20:54:45.543070 2854 server.go:1264] "Started kubelet" Nov 12 20:54:45.547737 kubelet[2854]: I1112 20:54:45.547549 2854 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Nov 12 20:54:45.552983 kubelet[2854]: I1112 20:54:45.551183 2854 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Nov 12 20:54:45.552983 kubelet[2854]: E1112 20:54:45.551523 2854 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.39:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.39:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4081.2.0-a-c73ec1ae7a.180753f326f5b8a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-c73ec1ae7a,UID:ci-4081.2.0-a-c73ec1ae7a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-c73ec1ae7a,},FirstTimestamp:2024-11-12 20:54:45.543041185 +0000 UTC m=+1.512160123,LastTimestamp:2024-11-12 20:54:45.543041185 +0000 UTC m=+1.512160123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-c73ec1ae7a,}" Nov 12 20:54:45.554051 kubelet[2854]: I1112 20:54:45.554030 2854 server.go:455] "Adding 
debug handlers to kubelet server" Nov 12 20:54:45.554268 kubelet[2854]: I1112 20:54:45.554210 2854 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Nov 12 20:54:45.554591 kubelet[2854]: I1112 20:54:45.554568 2854 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Nov 12 20:54:45.556686 kubelet[2854]: I1112 20:54:45.556664 2854 volume_manager.go:291] "Starting Kubelet Volume Manager" Nov 12 20:54:45.558227 kubelet[2854]: E1112 20:54:45.557342 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-c73ec1ae7a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Nov 12 20:54:45.558227 kubelet[2854]: I1112 20:54:45.557397 2854 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Nov 12 20:54:45.558227 kubelet[2854]: W1112 20:54:45.557777 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.558227 kubelet[2854]: E1112 20:54:45.557831 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.558851 kubelet[2854]: I1112 20:54:45.558827 2854 reconciler.go:26] "Reconciler: start to sync state" Nov 12 20:54:45.559274 kubelet[2854]: I1112 20:54:45.559250 2854 factory.go:221] Registration of the systemd container factory successfully Nov 12 20:54:45.559378 kubelet[2854]: I1112 20:54:45.559354 2854 factory.go:219] Registration of the crio 
container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Nov 12 20:54:45.560572 kubelet[2854]: I1112 20:54:45.560548 2854 factory.go:221] Registration of the containerd container factory successfully Nov 12 20:54:45.599443 kubelet[2854]: I1112 20:54:45.599412 2854 cpu_manager.go:214] "Starting CPU manager" policy="none" Nov 12 20:54:45.599443 kubelet[2854]: I1112 20:54:45.599435 2854 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Nov 12 20:54:45.599634 kubelet[2854]: I1112 20:54:45.599461 2854 state_mem.go:36] "Initialized new in-memory state store" Nov 12 20:54:45.607686 kubelet[2854]: I1112 20:54:45.607490 2854 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Nov 12 20:54:45.607686 kubelet[2854]: I1112 20:54:45.607549 2854 policy_none.go:49] "None policy: Start" Nov 12 20:54:45.608872 kubelet[2854]: I1112 20:54:45.608841 2854 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Nov 12 20:54:45.608872 kubelet[2854]: I1112 20:54:45.608877 2854 status_manager.go:217] "Starting to sync pod status with apiserver" Nov 12 20:54:45.609031 kubelet[2854]: I1112 20:54:45.608903 2854 kubelet.go:2337] "Starting kubelet main sync loop" Nov 12 20:54:45.609031 kubelet[2854]: E1112 20:54:45.608946 2854 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Nov 12 20:54:45.613702 kubelet[2854]: W1112 20:54:45.613670 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.613842 kubelet[2854]: E1112 20:54:45.613715 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:45.614388 kubelet[2854]: I1112 20:54:45.614246 2854 memory_manager.go:170] "Starting memorymanager" policy="None" Nov 12 20:54:45.614388 kubelet[2854]: I1112 20:54:45.614274 2854 state_mem.go:35] "Initializing new in-memory state store" Nov 12 20:54:45.624938 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Nov 12 20:54:45.632940 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Nov 12 20:54:45.636184 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Nov 12 20:54:45.646252 kubelet[2854]: I1112 20:54:45.645811 2854 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Nov 12 20:54:45.646252 kubelet[2854]: I1112 20:54:45.646074 2854 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Nov 12 20:54:45.646712 kubelet[2854]: I1112 20:54:45.646691 2854 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Nov 12 20:54:45.648769 kubelet[2854]: E1112 20:54:45.648737 2854 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4081.2.0-a-c73ec1ae7a\" not found" Nov 12 20:54:45.659634 kubelet[2854]: I1112 20:54:45.659592 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.660059 kubelet[2854]: E1112 20:54:45.660015 2854 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.710000 kubelet[2854]: I1112 20:54:45.709867 2854 topology_manager.go:215] "Topology Admit Handler" podUID="9b6b8d15bbbed81809402f2623377989" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.712113 kubelet[2854]: I1112 20:54:45.711977 2854 topology_manager.go:215] "Topology Admit Handler" podUID="cd54abfcec0ea9ab88c15ca3377fb673" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.713685 kubelet[2854]: I1112 20:54:45.713498 2854 topology_manager.go:215] "Topology Admit Handler" podUID="322f110ecebc388498fb85862ee1ab72" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.720740 systemd[1]: Created slice kubepods-burstable-pod9b6b8d15bbbed81809402f2623377989.slice - libcontainer container 
kubepods-burstable-pod9b6b8d15bbbed81809402f2623377989.slice. Nov 12 20:54:45.743930 systemd[1]: Created slice kubepods-burstable-podcd54abfcec0ea9ab88c15ca3377fb673.slice - libcontainer container kubepods-burstable-podcd54abfcec0ea9ab88c15ca3377fb673.slice. Nov 12 20:54:45.748817 systemd[1]: Created slice kubepods-burstable-pod322f110ecebc388498fb85862ee1ab72.slice - libcontainer container kubepods-burstable-pod322f110ecebc388498fb85862ee1ab72.slice. Nov 12 20:54:45.757766 kubelet[2854]: E1112 20:54:45.757719 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-c73ec1ae7a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Nov 12 20:54:45.760020 kubelet[2854]: I1112 20:54:45.759991 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760189 kubelet[2854]: I1112 20:54:45.760031 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760189 kubelet[2854]: I1112 20:54:45.760059 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: 
\"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760189 kubelet[2854]: I1112 20:54:45.760081 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd54abfcec0ea9ab88c15ca3377fb673-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"cd54abfcec0ea9ab88c15ca3377fb673\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760189 kubelet[2854]: I1112 20:54:45.760107 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760189 kubelet[2854]: I1112 20:54:45.760129 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760354 kubelet[2854]: I1112 20:54:45.760149 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760354 kubelet[2854]: I1112 20:54:45.760170 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.760354 kubelet[2854]: I1112 20:54:45.760192 2854 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.862524 kubelet[2854]: I1112 20:54:45.862486 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:45.862933 kubelet[2854]: E1112 20:54:45.862892 2854 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:46.041981 containerd[1692]: time="2024-11-12T20:54:46.041905565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a,Uid:9b6b8d15bbbed81809402f2623377989,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:46.047633 containerd[1692]: time="2024-11-12T20:54:46.047588030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-c73ec1ae7a,Uid:cd54abfcec0ea9ab88c15ca3377fb673,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:46.051391 containerd[1692]: time="2024-11-12T20:54:46.051260573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-c73ec1ae7a,Uid:322f110ecebc388498fb85862ee1ab72,Namespace:kube-system,Attempt:0,}" Nov 12 20:54:46.158590 kubelet[2854]: E1112 20:54:46.158433 2854 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-c73ec1ae7a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Nov 12 20:54:46.264732 kubelet[2854]: I1112 20:54:46.264697 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:46.265098 kubelet[2854]: E1112 20:54:46.265066 2854 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:54:46.517510 kubelet[2854]: W1112 20:54:46.517331 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:46.517510 kubelet[2854]: E1112 20:54:46.517405 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Nov 12 20:54:46.664576 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323953039.mount: Deactivated successfully. 
Nov 12 20:54:46.702032 containerd[1692]: time="2024-11-12T20:54:46.701942012Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:54:46.705449 containerd[1692]: time="2024-11-12T20:54:46.705381752Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Nov 12 20:54:46.708878 containerd[1692]: time="2024-11-12T20:54:46.708831492Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:54:46.711716 containerd[1692]: time="2024-11-12T20:54:46.711671025Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:54:46.714557 containerd[1692]: time="2024-11-12T20:54:46.714508358Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:54:46.721644 containerd[1692]: time="2024-11-12T20:54:46.721593340Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:54:46.726869 containerd[1692]: time="2024-11-12T20:54:46.726794100Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Nov 12 20:54:46.732767 containerd[1692]: time="2024-11-12T20:54:46.732708169Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Nov 12 20:54:46.733794 containerd[1692]: time="2024-11-12T20:54:46.733509278Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 682.161304ms"
Nov 12 20:54:46.735659 containerd[1692]: time="2024-11-12T20:54:46.735620302Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.947971ms"
Nov 12 20:54:46.736232 containerd[1692]: time="2024-11-12T20:54:46.736198509Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.176043ms"
Nov 12 20:54:46.831840 kubelet[2854]: W1112 20:54:46.831803 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:46.831840 kubelet[2854]: E1112 20:54:46.831846 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:46.959919 kubelet[2854]: E1112 20:54:46.959858 2854 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4081.2.0-a-c73ec1ae7a?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s"
Nov 12 20:54:47.047371 kubelet[2854]: W1112 20:54:47.047322 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:47.047371 kubelet[2854]: E1112 20:54:47.047374 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:47.067279 kubelet[2854]: I1112 20:54:47.067216 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:47.067622 kubelet[2854]: E1112 20:54:47.067589 2854 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:47.078134 kubelet[2854]: W1112 20:54:47.078077 2854 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-c73ec1ae7a&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:47.078231 kubelet[2854]: E1112 20:54:47.078142 2854 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4081.2.0-a-c73ec1ae7a&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:47.361883 containerd[1692]: time="2024-11-12T20:54:47.361577980Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:47.361883 containerd[1692]: time="2024-11-12T20:54:47.361659782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:47.361883 containerd[1692]: time="2024-11-12T20:54:47.361681783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.361883 containerd[1692]: time="2024-11-12T20:54:47.361787785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.367977 containerd[1692]: time="2024-11-12T20:54:47.367470310Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:47.367977 containerd[1692]: time="2024-11-12T20:54:47.367655914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:47.368220 containerd[1692]: time="2024-11-12T20:54:47.367975821Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.369409 containerd[1692]: time="2024-11-12T20:54:47.369306950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.372609 containerd[1692]: time="2024-11-12T20:54:47.372329717Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Nov 12 20:54:47.372609 containerd[1692]: time="2024-11-12T20:54:47.372395718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Nov 12 20:54:47.372609 containerd[1692]: time="2024-11-12T20:54:47.372418419Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.372609 containerd[1692]: time="2024-11-12T20:54:47.372509321Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Nov 12 20:54:47.400360 systemd[1]: Started cri-containerd-f727b59ea2fb35fcf3056b16d36e16bc1f23c2f51d1118b0d38d8ec528a5969f.scope - libcontainer container f727b59ea2fb35fcf3056b16d36e16bc1f23c2f51d1118b0d38d8ec528a5969f.
Nov 12 20:54:47.406128 systemd[1]: Started cri-containerd-7210724559148e496bafcfccd1274c0213a4cc364895c43171f07d713da9cae4.scope - libcontainer container 7210724559148e496bafcfccd1274c0213a4cc364895c43171f07d713da9cae4.
Nov 12 20:54:47.408720 systemd[1]: Started cri-containerd-e33ba529f559b60711aff95e34206992bc01275eb4c79f1e7e0f6bca7628396b.scope - libcontainer container e33ba529f559b60711aff95e34206992bc01275eb4c79f1e7e0f6bca7628396b.
Nov 12 20:54:47.485076 containerd[1692]: time="2024-11-12T20:54:47.482543043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4081.2.0-a-c73ec1ae7a,Uid:cd54abfcec0ea9ab88c15ca3377fb673,Namespace:kube-system,Attempt:0,} returns sandbox id \"f727b59ea2fb35fcf3056b16d36e16bc1f23c2f51d1118b0d38d8ec528a5969f\""
Nov 12 20:54:47.490331 containerd[1692]: time="2024-11-12T20:54:47.490225612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a,Uid:9b6b8d15bbbed81809402f2623377989,Namespace:kube-system,Attempt:0,} returns sandbox id \"7210724559148e496bafcfccd1274c0213a4cc364895c43171f07d713da9cae4\""
Nov 12 20:54:47.495119 containerd[1692]: time="2024-11-12T20:54:47.495009817Z" level=info msg="CreateContainer within sandbox \"f727b59ea2fb35fcf3056b16d36e16bc1f23c2f51d1118b0d38d8ec528a5969f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Nov 12 20:54:47.498566 containerd[1692]: time="2024-11-12T20:54:47.498319490Z" level=info msg="CreateContainer within sandbox \"7210724559148e496bafcfccd1274c0213a4cc364895c43171f07d713da9cae4\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Nov 12 20:54:47.501437 containerd[1692]: time="2024-11-12T20:54:47.501390158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4081.2.0-a-c73ec1ae7a,Uid:322f110ecebc388498fb85862ee1ab72,Namespace:kube-system,Attempt:0,} returns sandbox id \"e33ba529f559b60711aff95e34206992bc01275eb4c79f1e7e0f6bca7628396b\""
Nov 12 20:54:47.504146 containerd[1692]: time="2024-11-12T20:54:47.504120318Z" level=info msg="CreateContainer within sandbox \"e33ba529f559b60711aff95e34206992bc01275eb4c79f1e7e0f6bca7628396b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Nov 12 20:54:47.553487 containerd[1692]: time="2024-11-12T20:54:47.553432103Z" level=info msg="CreateContainer within sandbox \"7210724559148e496bafcfccd1274c0213a4cc364895c43171f07d713da9cae4\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"98293d747740f63d2af526399b97a531e88cfba121e47f7f9a4d269db59a093e\""
Nov 12 20:54:47.554346 containerd[1692]: time="2024-11-12T20:54:47.554320023Z" level=info msg="StartContainer for \"98293d747740f63d2af526399b97a531e88cfba121e47f7f9a4d269db59a093e\""
Nov 12 20:54:47.556600 containerd[1692]: time="2024-11-12T20:54:47.556556572Z" level=info msg="CreateContainer within sandbox \"f727b59ea2fb35fcf3056b16d36e16bc1f23c2f51d1118b0d38d8ec528a5969f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9a29a84202ab4ebe6d0f97e782ce0e38d31eeb1507f03779f372f36d56b904e9\""
Nov 12 20:54:47.560010 containerd[1692]: time="2024-11-12T20:54:47.557906502Z" level=info msg="StartContainer for \"9a29a84202ab4ebe6d0f97e782ce0e38d31eeb1507f03779f372f36d56b904e9\""
Nov 12 20:54:47.584840 containerd[1692]: time="2024-11-12T20:54:47.584791393Z" level=info msg="CreateContainer within sandbox \"e33ba529f559b60711aff95e34206992bc01275eb4c79f1e7e0f6bca7628396b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"1413bc1223f2ddf6960adb17e572dca64f37500168920e9422531f9763a8d76f\""
Nov 12 20:54:47.590137 containerd[1692]: time="2024-11-12T20:54:47.590101610Z" level=info msg="StartContainer for \"1413bc1223f2ddf6960adb17e572dca64f37500168920e9422531f9763a8d76f\""
Nov 12 20:54:47.598515 systemd[1]: Started cri-containerd-98293d747740f63d2af526399b97a531e88cfba121e47f7f9a4d269db59a093e.scope - libcontainer container 98293d747740f63d2af526399b97a531e88cfba121e47f7f9a4d269db59a093e.
Nov 12 20:54:47.600980 systemd[1]: Started cri-containerd-9a29a84202ab4ebe6d0f97e782ce0e38d31eeb1507f03779f372f36d56b904e9.scope - libcontainer container 9a29a84202ab4ebe6d0f97e782ce0e38d31eeb1507f03779f372f36d56b904e9.
Nov 12 20:54:47.633954 kubelet[2854]: E1112 20:54:47.633566 2854 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused
Nov 12 20:54:47.654450 systemd[1]: Started cri-containerd-1413bc1223f2ddf6960adb17e572dca64f37500168920e9422531f9763a8d76f.scope - libcontainer container 1413bc1223f2ddf6960adb17e572dca64f37500168920e9422531f9763a8d76f.
Nov 12 20:54:47.712234 containerd[1692]: time="2024-11-12T20:54:47.712183297Z" level=info msg="StartContainer for \"9a29a84202ab4ebe6d0f97e782ce0e38d31eeb1507f03779f372f36d56b904e9\" returns successfully"
Nov 12 20:54:47.712594 containerd[1692]: time="2024-11-12T20:54:47.712315300Z" level=info msg="StartContainer for \"98293d747740f63d2af526399b97a531e88cfba121e47f7f9a4d269db59a093e\" returns successfully"
Nov 12 20:54:47.775287 containerd[1692]: time="2024-11-12T20:54:47.775230985Z" level=info msg="StartContainer for \"1413bc1223f2ddf6960adb17e572dca64f37500168920e9422531f9763a8d76f\" returns successfully"
Nov 12 20:54:48.669996 kubelet[2854]: I1112 20:54:48.669949 2854 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:49.805286 kubelet[2854]: E1112 20:54:49.805230 2854 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4081.2.0-a-c73ec1ae7a\" not found" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:49.854126 kubelet[2854]: E1112 20:54:49.854015 2854 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4081.2.0-a-c73ec1ae7a.180753f326f5b8a1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4081.2.0-a-c73ec1ae7a,UID:ci-4081.2.0-a-c73ec1ae7a,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4081.2.0-a-c73ec1ae7a,},FirstTimestamp:2024-11-12 20:54:45.543041185 +0000 UTC m=+1.512160123,LastTimestamp:2024-11-12 20:54:45.543041185 +0000 UTC m=+1.512160123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4081.2.0-a-c73ec1ae7a,}"
Nov 12 20:54:50.009383 kubelet[2854]: I1112 20:54:50.009341 2854 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:50.540713 kubelet[2854]: I1112 20:54:50.540420 2854 apiserver.go:52] "Watching apiserver"
Nov 12 20:54:50.558366 kubelet[2854]: I1112 20:54:50.558330 2854 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 20:54:51.661882 kubelet[2854]: W1112 20:54:51.661291 2854 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:54:52.096744 systemd[1]: Reloading requested from client PID 3126 ('systemctl') (unit session-9.scope)...
Nov 12 20:54:52.096762 systemd[1]: Reloading...
Nov 12 20:54:52.209163 zram_generator::config[3162]: No configuration found.
Nov 12 20:54:52.390779 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Nov 12 20:54:52.488149 systemd[1]: Reloading finished in 390 ms.
Nov 12 20:54:52.535322 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:54:52.542446 systemd[1]: kubelet.service: Deactivated successfully.
Nov 12 20:54:52.542770 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:54:52.548385 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Nov 12 20:54:52.649860 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Nov 12 20:54:52.659330 (kubelet)[3233]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Nov 12 20:54:52.717582 kubelet[3233]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:54:52.717582 kubelet[3233]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Nov 12 20:54:52.717582 kubelet[3233]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Nov 12 20:54:52.719041 kubelet[3233]: I1112 20:54:52.718164 3233 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Nov 12 20:54:52.725031 kubelet[3233]: I1112 20:54:52.723852 3233 server.go:484] "Kubelet version" kubeletVersion="v1.30.1"
Nov 12 20:54:52.725031 kubelet[3233]: I1112 20:54:52.723880 3233 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Nov 12 20:54:52.725031 kubelet[3233]: I1112 20:54:52.724193 3233 server.go:927] "Client rotation is on, will bootstrap in background"
Nov 12 20:54:52.726503 kubelet[3233]: I1112 20:54:52.726473 3233 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Nov 12 20:54:52.732690 kubelet[3233]: I1112 20:54:52.730686 3233 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Nov 12 20:54:52.739625 kubelet[3233]: I1112 20:54:52.739604 3233 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Nov 12 20:54:52.739856 kubelet[3233]: I1112 20:54:52.739812 3233 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Nov 12 20:54:52.740071 kubelet[3233]: I1112 20:54:52.739858 3233 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4081.2.0-a-c73ec1ae7a","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Nov 12 20:54:52.740251 kubelet[3233]: I1112 20:54:52.740089 3233 topology_manager.go:138] "Creating topology manager with none policy"
Nov 12 20:54:52.740251 kubelet[3233]: I1112 20:54:52.740102 3233 container_manager_linux.go:301] "Creating device plugin manager"
Nov 12 20:54:52.740251 kubelet[3233]: I1112 20:54:52.740177 3233 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:54:52.740395 kubelet[3233]: I1112 20:54:52.740299 3233 kubelet.go:400] "Attempting to sync node with API server"
Nov 12 20:54:52.740395 kubelet[3233]: I1112 20:54:52.740316 3233 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests"
Nov 12 20:54:52.740395 kubelet[3233]: I1112 20:54:52.740342 3233 kubelet.go:312] "Adding apiserver pod source"
Nov 12 20:54:52.740395 kubelet[3233]: I1112 20:54:52.740359 3233 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Nov 12 20:54:52.741940 kubelet[3233]: I1112 20:54:52.741737 3233 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Nov 12 20:54:52.742136 kubelet[3233]: I1112 20:54:52.742120 3233 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Nov 12 20:54:52.742695 kubelet[3233]: I1112 20:54:52.742676 3233 server.go:1264] "Started kubelet"
Nov 12 20:54:52.745470 kubelet[3233]: I1112 20:54:52.745448 3233 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Nov 12 20:54:52.751581 kubelet[3233]: I1112 20:54:52.751542 3233 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Nov 12 20:54:52.752757 kubelet[3233]: I1112 20:54:52.752734 3233 server.go:455] "Adding debug handlers to kubelet server"
Nov 12 20:54:52.754884 kubelet[3233]: I1112 20:54:52.754833 3233 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Nov 12 20:54:52.755612 kubelet[3233]: I1112 20:54:52.755336 3233 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Nov 12 20:54:52.757090 kubelet[3233]: I1112 20:54:52.757075 3233 volume_manager.go:291] "Starting Kubelet Volume Manager"
Nov 12 20:54:52.759852 kubelet[3233]: I1112 20:54:52.758869 3233 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Nov 12 20:54:52.759852 kubelet[3233]: I1112 20:54:52.759083 3233 reconciler.go:26] "Reconciler: start to sync state"
Nov 12 20:54:52.761913 kubelet[3233]: I1112 20:54:52.761141 3233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Nov 12 20:54:52.762369 kubelet[3233]: I1112 20:54:52.762349 3233 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Nov 12 20:54:52.762466 kubelet[3233]: I1112 20:54:52.762381 3233 status_manager.go:217] "Starting to sync pod status with apiserver"
Nov 12 20:54:52.762466 kubelet[3233]: I1112 20:54:52.762398 3233 kubelet.go:2337] "Starting kubelet main sync loop"
Nov 12 20:54:52.762466 kubelet[3233]: E1112 20:54:52.762440 3233 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Nov 12 20:54:52.778010 kubelet[3233]: I1112 20:54:52.777951 3233 factory.go:221] Registration of the systemd container factory successfully
Nov 12 20:54:52.778126 kubelet[3233]: I1112 20:54:52.778104 3233 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Nov 12 20:54:52.784417 kubelet[3233]: I1112 20:54:52.784389 3233 factory.go:221] Registration of the containerd container factory successfully
Nov 12 20:54:52.826888 kubelet[3233]: I1112 20:54:52.826856 3233 cpu_manager.go:214] "Starting CPU manager" policy="none"
Nov 12 20:54:52.826888 kubelet[3233]: I1112 20:54:52.826878 3233 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Nov 12 20:54:52.826888 kubelet[3233]: I1112 20:54:52.826900 3233 state_mem.go:36] "Initialized new in-memory state store"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827083 3233 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827095 3233 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827111 3233 policy_none.go:49] "None policy: Start"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827717 3233 memory_manager.go:170] "Starting memorymanager" policy="None"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827738 3233 state_mem.go:35] "Initializing new in-memory state store"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.827868 3233 state_mem.go:75] "Updated machine memory state"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.832369 3233 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Nov 12 20:54:53.194347 kubelet[3233]: E1112 20:54:52.862808 3233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.862820 3233 kubelet_node_status.go:73] "Attempting to register node" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.194347 kubelet[3233]: I1112 20:54:52.876446 3233 kubelet_node_status.go:112] "Node was previously registered" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.194347 kubelet[3233]: E1112 20:54:53.063241 3233 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet"
Nov 12 20:54:53.197474 kubelet[3233]: I1112 20:54:53.194933 3233 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Nov 12 20:54:53.197474 kubelet[3233]: I1112 20:54:53.195603 3233 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Nov 12 20:54:53.197474 kubelet[3233]: I1112 20:54:53.197207 3233 kubelet_node_status.go:76] "Successfully registered node" node="ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.464280 kubelet[3233]: I1112 20:54:53.463731 3233 topology_manager.go:215] "Topology Admit Handler" podUID="322f110ecebc388498fb85862ee1ab72" podNamespace="kube-system" podName="kube-apiserver-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.467760 kubelet[3233]: I1112 20:54:53.464547 3233 topology_manager.go:215] "Topology Admit Handler" podUID="9b6b8d15bbbed81809402f2623377989" podNamespace="kube-system" podName="kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.467760 kubelet[3233]: I1112 20:54:53.464695 3233 topology_manager.go:215] "Topology Admit Handler" podUID="cd54abfcec0ea9ab88c15ca3377fb673" podNamespace="kube-system" podName="kube-scheduler-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.475542 kubelet[3233]: W1112 20:54:53.475508 3233 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:54:53.481099 kubelet[3233]: W1112 20:54:53.481064 3233 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:54:53.481545 kubelet[3233]: W1112 20:54:53.481525 3233 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Nov 12 20:54:53.481632 kubelet[3233]: E1112 20:54:53.481587 3233 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" already exists" pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564169 kubelet[3233]: I1112 20:54:53.563429 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-k8s-certs\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564757 kubelet[3233]: I1112 20:54:53.564411 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-kubeconfig\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564757 kubelet[3233]: I1112 20:54:53.564448 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-ca-certs\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564757 kubelet[3233]: I1112 20:54:53.564476 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-ca-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564757 kubelet[3233]: I1112 20:54:53.564526 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-flexvolume-dir\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.564757 kubelet[3233]: I1112 20:54:53.564551 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-k8s-certs\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.565029 kubelet[3233]: I1112 20:54:53.564622 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b6b8d15bbbed81809402f2623377989-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"9b6b8d15bbbed81809402f2623377989\") " pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.565029 kubelet[3233]: I1112 20:54:53.564647 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd54abfcec0ea9ab88c15ca3377fb673-kubeconfig\") pod \"kube-scheduler-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"cd54abfcec0ea9ab88c15ca3377fb673\") " pod="kube-system/kube-scheduler-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.565029 kubelet[3233]: I1112 20:54:53.564681 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/322f110ecebc388498fb85862ee1ab72-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4081.2.0-a-c73ec1ae7a\" (UID: \"322f110ecebc388498fb85862ee1ab72\") " pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a"
Nov 12 20:54:53.741905 kubelet[3233]: I1112 20:54:53.741786 3233 apiserver.go:52] "Watching apiserver"
Nov 12 20:54:53.759638 kubelet[3233]: I1112 20:54:53.759609 3233 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Nov 12 20:54:53.802609 kubelet[3233]: I1112 20:54:53.802443 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4081.2.0-a-c73ec1ae7a" podStartSLOduration=0.802422957 podStartE2EDuration="802.422957ms" podCreationTimestamp="2024-11-12 20:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:53.802201355 +0000 UTC m=+1.137822752" watchObservedRunningTime="2024-11-12 20:54:53.802422957 +0000 UTC m=+1.138044454"
Nov 12 20:54:53.822379 kubelet[3233]: I1112 20:54:53.821882 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4081.2.0-a-c73ec1ae7a" podStartSLOduration=2.8218612910000003 podStartE2EDuration="2.821861291s" podCreationTimestamp="2024-11-12 20:54:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:53.813136186 +0000 UTC m=+1.148757683" watchObservedRunningTime="2024-11-12 20:54:53.821861291 +0000 UTC m=+1.157482688"
Nov 12 20:54:53.822379 kubelet[3233]: I1112 20:54:53.821999 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4081.2.0-a-c73ec1ae7a" podStartSLOduration=0.821990792 podStartE2EDuration="821.990792ms" podCreationTimestamp="2024-11-12 20:54:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:54:53.821660588 +0000 UTC m=+1.157282085" watchObservedRunningTime="2024-11-12 20:54:53.821990792 +0000 UTC m=+1.157612189"
Nov 12 20:54:58.295342 sudo[2245]: pam_unix(sudo:session): session closed for user root
Nov 12 20:54:58.397649 sshd[2227]: pam_unix(sshd:session):
session closed for user core Nov 12 20:54:58.402142 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:59212.service: Deactivated successfully. Nov 12 20:54:58.404310 systemd[1]: session-9.scope: Deactivated successfully. Nov 12 20:54:58.404534 systemd[1]: session-9.scope: Consumed 4.457s CPU time, 190.3M memory peak, 0B memory swap peak. Nov 12 20:54:58.405156 systemd-logind[1670]: Session 9 logged out. Waiting for processes to exit. Nov 12 20:54:58.406211 systemd-logind[1670]: Removed session 9. Nov 12 20:55:05.426713 kubelet[3233]: I1112 20:55:05.426532 3233 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Nov 12 20:55:05.427435 containerd[1692]: time="2024-11-12T20:55:05.427254912Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Nov 12 20:55:05.429322 kubelet[3233]: I1112 20:55:05.427501 3233 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Nov 12 20:55:06.050169 kubelet[3233]: I1112 20:55:06.050120 3233 topology_manager.go:215] "Topology Admit Handler" podUID="4a417df9-8ce6-4b01-9620-c159219f9e3e" podNamespace="kube-system" podName="kube-proxy-mbkdn" Nov 12 20:55:06.072663 systemd[1]: Created slice kubepods-besteffort-pod4a417df9_8ce6_4b01_9620_c159219f9e3e.slice - libcontainer container kubepods-besteffort-pod4a417df9_8ce6_4b01_9620_c159219f9e3e.slice. 
Nov 12 20:55:06.153254 kubelet[3233]: I1112 20:55:06.153124 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4a417df9-8ce6-4b01-9620-c159219f9e3e-kube-proxy\") pod \"kube-proxy-mbkdn\" (UID: \"4a417df9-8ce6-4b01-9620-c159219f9e3e\") " pod="kube-system/kube-proxy-mbkdn" Nov 12 20:55:06.153254 kubelet[3233]: I1112 20:55:06.153182 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4a417df9-8ce6-4b01-9620-c159219f9e3e-lib-modules\") pod \"kube-proxy-mbkdn\" (UID: \"4a417df9-8ce6-4b01-9620-c159219f9e3e\") " pod="kube-system/kube-proxy-mbkdn" Nov 12 20:55:06.153254 kubelet[3233]: I1112 20:55:06.153223 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9cp\" (UniqueName: \"kubernetes.io/projected/4a417df9-8ce6-4b01-9620-c159219f9e3e-kube-api-access-qx9cp\") pod \"kube-proxy-mbkdn\" (UID: \"4a417df9-8ce6-4b01-9620-c159219f9e3e\") " pod="kube-system/kube-proxy-mbkdn" Nov 12 20:55:06.153254 kubelet[3233]: I1112 20:55:06.153256 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4a417df9-8ce6-4b01-9620-c159219f9e3e-xtables-lock\") pod \"kube-proxy-mbkdn\" (UID: \"4a417df9-8ce6-4b01-9620-c159219f9e3e\") " pod="kube-system/kube-proxy-mbkdn" Nov 12 20:55:06.259317 kubelet[3233]: E1112 20:55:06.259270 3233 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Nov 12 20:55:06.259317 kubelet[3233]: E1112 20:55:06.259306 3233 projected.go:200] Error preparing data for projected volume kube-api-access-qx9cp for pod kube-system/kube-proxy-mbkdn: configmap "kube-root-ca.crt" not found Nov 12 20:55:06.259524 kubelet[3233]: E1112 20:55:06.259391 3233 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/4a417df9-8ce6-4b01-9620-c159219f9e3e-kube-api-access-qx9cp podName:4a417df9-8ce6-4b01-9620-c159219f9e3e nodeName:}" failed. No retries permitted until 2024-11-12 20:55:06.759365932 +0000 UTC m=+14.094987329 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-qx9cp" (UniqueName: "kubernetes.io/projected/4a417df9-8ce6-4b01-9620-c159219f9e3e-kube-api-access-qx9cp") pod "kube-proxy-mbkdn" (UID: "4a417df9-8ce6-4b01-9620-c159219f9e3e") : configmap "kube-root-ca.crt" not found Nov 12 20:55:06.575573 kubelet[3233]: I1112 20:55:06.574937 3233 topology_manager.go:215] "Topology Admit Handler" podUID="3d15cabf-26a6-4ac6-bb57-f398575a0de4" podNamespace="tigera-operator" podName="tigera-operator-5645cfc98-6lcnw" Nov 12 20:55:06.590645 systemd[1]: Created slice kubepods-besteffort-pod3d15cabf_26a6_4ac6_bb57_f398575a0de4.slice - libcontainer container kubepods-besteffort-pod3d15cabf_26a6_4ac6_bb57_f398575a0de4.slice. 
Nov 12 20:55:06.657479 kubelet[3233]: I1112 20:55:06.657419 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3d15cabf-26a6-4ac6-bb57-f398575a0de4-var-lib-calico\") pod \"tigera-operator-5645cfc98-6lcnw\" (UID: \"3d15cabf-26a6-4ac6-bb57-f398575a0de4\") " pod="tigera-operator/tigera-operator-5645cfc98-6lcnw" Nov 12 20:55:06.657479 kubelet[3233]: I1112 20:55:06.657482 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtfkm\" (UniqueName: \"kubernetes.io/projected/3d15cabf-26a6-4ac6-bb57-f398575a0de4-kube-api-access-wtfkm\") pod \"tigera-operator-5645cfc98-6lcnw\" (UID: \"3d15cabf-26a6-4ac6-bb57-f398575a0de4\") " pod="tigera-operator/tigera-operator-5645cfc98-6lcnw" Nov 12 20:55:06.894403 containerd[1692]: time="2024-11-12T20:55:06.894293217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-6lcnw,Uid:3d15cabf-26a6-4ac6-bb57-f398575a0de4,Namespace:tigera-operator,Attempt:0,}" Nov 12 20:55:06.943896 containerd[1692]: time="2024-11-12T20:55:06.943798079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:06.944141 containerd[1692]: time="2024-11-12T20:55:06.943879680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:06.944141 containerd[1692]: time="2024-11-12T20:55:06.943897580Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.944141 containerd[1692]: time="2024-11-12T20:55:06.943994782Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:06.971119 systemd[1]: Started cri-containerd-9f7169d96ebfc58431334c9a0034ff8f7fde01896745f8d4b1a5384dbec44523.scope - libcontainer container 9f7169d96ebfc58431334c9a0034ff8f7fde01896745f8d4b1a5384dbec44523. Nov 12 20:55:06.985330 containerd[1692]: time="2024-11-12T20:55:06.985045930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbkdn,Uid:4a417df9-8ce6-4b01-9620-c159219f9e3e,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:07.011336 containerd[1692]: time="2024-11-12T20:55:07.011301381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5645cfc98-6lcnw,Uid:3d15cabf-26a6-4ac6-bb57-f398575a0de4,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9f7169d96ebfc58431334c9a0034ff8f7fde01896745f8d4b1a5384dbec44523\"" Nov 12 20:55:07.014155 containerd[1692]: time="2024-11-12T20:55:07.013877215Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\"" Nov 12 20:55:07.034619 containerd[1692]: time="2024-11-12T20:55:07.034493691Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:07.034619 containerd[1692]: time="2024-11-12T20:55:07.034555492Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:07.034619 containerd[1692]: time="2024-11-12T20:55:07.034573092Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:07.034983 containerd[1692]: time="2024-11-12T20:55:07.034660493Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:07.051155 systemd[1]: Started cri-containerd-5e6eee525d294d74a56ad3ebe634a667fd3a1d25ad87db999a1b4fef972c39a4.scope - libcontainer container 5e6eee525d294d74a56ad3ebe634a667fd3a1d25ad87db999a1b4fef972c39a4. Nov 12 20:55:07.072294 containerd[1692]: time="2024-11-12T20:55:07.072240395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mbkdn,Uid:4a417df9-8ce6-4b01-9620-c159219f9e3e,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e6eee525d294d74a56ad3ebe634a667fd3a1d25ad87db999a1b4fef972c39a4\"" Nov 12 20:55:07.075417 containerd[1692]: time="2024-11-12T20:55:07.075382837Z" level=info msg="CreateContainer within sandbox \"5e6eee525d294d74a56ad3ebe634a667fd3a1d25ad87db999a1b4fef972c39a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Nov 12 20:55:07.116862 containerd[1692]: time="2024-11-12T20:55:07.116807391Z" level=info msg="CreateContainer within sandbox \"5e6eee525d294d74a56ad3ebe634a667fd3a1d25ad87db999a1b4fef972c39a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8d9d4d295bfb5dc117527f447f0e369004eef2dfb828fd1e7256ff13d231c19d\"" Nov 12 20:55:07.117819 containerd[1692]: time="2024-11-12T20:55:07.117620702Z" level=info msg="StartContainer for \"8d9d4d295bfb5dc117527f447f0e369004eef2dfb828fd1e7256ff13d231c19d\"" Nov 12 20:55:07.144107 systemd[1]: Started cri-containerd-8d9d4d295bfb5dc117527f447f0e369004eef2dfb828fd1e7256ff13d231c19d.scope - libcontainer container 8d9d4d295bfb5dc117527f447f0e369004eef2dfb828fd1e7256ff13d231c19d. Nov 12 20:55:07.174344 containerd[1692]: time="2024-11-12T20:55:07.174230258Z" level=info msg="StartContainer for \"8d9d4d295bfb5dc117527f447f0e369004eef2dfb828fd1e7256ff13d231c19d\" returns successfully" Nov 12 20:55:10.991616 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4166121803.mount: Deactivated successfully. 
Nov 12 20:55:11.603292 containerd[1692]: time="2024-11-12T20:55:11.603232227Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.605255 containerd[1692]: time="2024-11-12T20:55:11.605127546Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.0: active requests=0, bytes read=21763323" Nov 12 20:55:11.610804 containerd[1692]: time="2024-11-12T20:55:11.609595189Z" level=info msg="ImageCreate event name:\"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.615720 containerd[1692]: time="2024-11-12T20:55:11.615643748Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:11.616638 containerd[1692]: time="2024-11-12T20:55:11.616480956Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.0\" with image id \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\", repo tag \"quay.io/tigera/operator:v1.36.0\", repo digest \"quay.io/tigera/operator@sha256:67a96f7dcdde24abff66b978202c5e64b9909f4a8fcd9357daca92b499b26e4d\", size \"21757542\" in 4.60256104s" Nov 12 20:55:11.616638 containerd[1692]: time="2024-11-12T20:55:11.616524157Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.0\" returns image reference \"sha256:6969e3644ac6358fd921194ec267a243ad5856f3d9595bdbb9a76dc5c5e9875d\"" Nov 12 20:55:11.619277 containerd[1692]: time="2024-11-12T20:55:11.619114282Z" level=info msg="CreateContainer within sandbox \"9f7169d96ebfc58431334c9a0034ff8f7fde01896745f8d4b1a5384dbec44523\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Nov 12 20:55:11.663077 containerd[1692]: time="2024-11-12T20:55:11.662923807Z" level=info msg="CreateContainer within sandbox 
\"9f7169d96ebfc58431334c9a0034ff8f7fde01896745f8d4b1a5384dbec44523\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"746d303ff9027090cb39bc20413be90d07f700f1eccd13cde347ac65761edb2d\"" Nov 12 20:55:11.664786 containerd[1692]: time="2024-11-12T20:55:11.663739215Z" level=info msg="StartContainer for \"746d303ff9027090cb39bc20413be90d07f700f1eccd13cde347ac65761edb2d\"" Nov 12 20:55:11.698151 systemd[1]: Started cri-containerd-746d303ff9027090cb39bc20413be90d07f700f1eccd13cde347ac65761edb2d.scope - libcontainer container 746d303ff9027090cb39bc20413be90d07f700f1eccd13cde347ac65761edb2d. Nov 12 20:55:11.727990 containerd[1692]: time="2024-11-12T20:55:11.727901639Z" level=info msg="StartContainer for \"746d303ff9027090cb39bc20413be90d07f700f1eccd13cde347ac65761edb2d\" returns successfully" Nov 12 20:55:11.867416 kubelet[3233]: I1112 20:55:11.866415 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mbkdn" podStartSLOduration=5.866392485 podStartE2EDuration="5.866392485s" podCreationTimestamp="2024-11-12 20:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:07.853318134 +0000 UTC m=+15.188939531" watchObservedRunningTime="2024-11-12 20:55:11.866392485 +0000 UTC m=+19.202013882" Nov 12 20:55:14.794679 kubelet[3233]: I1112 20:55:14.794391 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5645cfc98-6lcnw" podStartSLOduration=4.190022081 podStartE2EDuration="8.794366441s" podCreationTimestamp="2024-11-12 20:55:06 +0000 UTC" firstStartedPulling="2024-11-12 20:55:07.013192506 +0000 UTC m=+14.348814003" lastFinishedPulling="2024-11-12 20:55:11.617536866 +0000 UTC m=+18.953158363" observedRunningTime="2024-11-12 20:55:11.866814589 +0000 UTC m=+19.202436086" watchObservedRunningTime="2024-11-12 20:55:14.794366441 +0000 UTC 
m=+22.129987938" Nov 12 20:55:14.796691 kubelet[3233]: I1112 20:55:14.795359 3233 topology_manager.go:215] "Topology Admit Handler" podUID="bb5b8a50-1c6d-45d1-9c8e-750ff698926f" podNamespace="calico-system" podName="calico-typha-6cfc7c4dd9-2d9kl" Nov 12 20:55:14.806271 systemd[1]: Created slice kubepods-besteffort-podbb5b8a50_1c6d_45d1_9c8e_750ff698926f.slice - libcontainer container kubepods-besteffort-podbb5b8a50_1c6d_45d1_9c8e_750ff698926f.slice. Nov 12 20:55:14.908085 kubelet[3233]: I1112 20:55:14.907866 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bb5b8a50-1c6d-45d1-9c8e-750ff698926f-tigera-ca-bundle\") pod \"calico-typha-6cfc7c4dd9-2d9kl\" (UID: \"bb5b8a50-1c6d-45d1-9c8e-750ff698926f\") " pod="calico-system/calico-typha-6cfc7c4dd9-2d9kl" Nov 12 20:55:14.908085 kubelet[3233]: I1112 20:55:14.907937 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bb5b8a50-1c6d-45d1-9c8e-750ff698926f-typha-certs\") pod \"calico-typha-6cfc7c4dd9-2d9kl\" (UID: \"bb5b8a50-1c6d-45d1-9c8e-750ff698926f\") " pod="calico-system/calico-typha-6cfc7c4dd9-2d9kl" Nov 12 20:55:14.908085 kubelet[3233]: I1112 20:55:14.907980 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kr6g9\" (UniqueName: \"kubernetes.io/projected/bb5b8a50-1c6d-45d1-9c8e-750ff698926f-kube-api-access-kr6g9\") pod \"calico-typha-6cfc7c4dd9-2d9kl\" (UID: \"bb5b8a50-1c6d-45d1-9c8e-750ff698926f\") " pod="calico-system/calico-typha-6cfc7c4dd9-2d9kl" Nov 12 20:55:14.968702 kubelet[3233]: I1112 20:55:14.968394 3233 topology_manager.go:215] "Topology Admit Handler" podUID="49b25c4b-196f-4ddd-883c-b72a664742f3" podNamespace="calico-system" podName="calico-node-fnnmf" Nov 12 20:55:14.980989 systemd[1]: Created slice 
kubepods-besteffort-pod49b25c4b_196f_4ddd_883c_b72a664742f3.slice - libcontainer container kubepods-besteffort-pod49b25c4b_196f_4ddd_883c_b72a664742f3.slice. Nov 12 20:55:15.008514 kubelet[3233]: I1112 20:55:15.008387 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49b25c4b-196f-4ddd-883c-b72a664742f3-tigera-ca-bundle\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.008514 kubelet[3233]: I1112 20:55:15.008464 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-var-lib-calico\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.008514 kubelet[3233]: I1112 20:55:15.008482 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-cni-bin-dir\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.008807 kubelet[3233]: I1112 20:55:15.008505 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7x5d\" (UniqueName: \"kubernetes.io/projected/49b25c4b-196f-4ddd-883c-b72a664742f3-kube-api-access-z7x5d\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009615 kubelet[3233]: I1112 20:55:15.008579 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: 
\"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-flexvol-driver-host\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009615 kubelet[3233]: I1112 20:55:15.008971 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-lib-modules\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009615 kubelet[3233]: I1112 20:55:15.009000 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-xtables-lock\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009615 kubelet[3233]: I1112 20:55:15.009060 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-policysync\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009615 kubelet[3233]: I1112 20:55:15.009083 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-cni-log-dir\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009871 kubelet[3233]: I1112 20:55:15.009106 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-cni-net-dir\") pod 
\"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009871 kubelet[3233]: I1112 20:55:15.009130 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/49b25c4b-196f-4ddd-883c-b72a664742f3-node-certs\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.009871 kubelet[3233]: I1112 20:55:15.009155 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/49b25c4b-196f-4ddd-883c-b72a664742f3-var-run-calico\") pod \"calico-node-fnnmf\" (UID: \"49b25c4b-196f-4ddd-883c-b72a664742f3\") " pod="calico-system/calico-node-fnnmf" Nov 12 20:55:15.123412 kubelet[3233]: E1112 20:55:15.123356 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.123412 kubelet[3233]: W1112 20:55:15.123393 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.123452 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.123753 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.124447 kubelet[3233]: W1112 20:55:15.123768 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.123795 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.124010 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.124447 kubelet[3233]: W1112 20:55:15.124020 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.124032 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.124341 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.124447 kubelet[3233]: W1112 20:55:15.124352 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.124447 kubelet[3233]: E1112 20:55:15.124365 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.124685 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128276 kubelet[3233]: W1112 20:55:15.124698 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.124713 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.124935 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128276 kubelet[3233]: W1112 20:55:15.124946 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.124988 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.125202 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128276 kubelet[3233]: W1112 20:55:15.125214 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.125227 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.128276 kubelet[3233]: E1112 20:55:15.125465 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128667 kubelet[3233]: W1112 20:55:15.125588 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.125608 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.125851 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128667 kubelet[3233]: W1112 20:55:15.125863 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.125876 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.126270 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128667 kubelet[3233]: W1112 20:55:15.126283 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.126298 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.128667 kubelet[3233]: E1112 20:55:15.126566 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.128667 kubelet[3233]: W1112 20:55:15.126579 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.126593 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127053 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.133176 kubelet[3233]: W1112 20:55:15.127066 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127081 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127263 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.133176 kubelet[3233]: W1112 20:55:15.127272 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127283 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127481 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.133176 kubelet[3233]: W1112 20:55:15.127490 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.133176 kubelet[3233]: E1112 20:55:15.127501 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.133718 containerd[1692]: time="2024-11-12T20:55:15.130495508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfc7c4dd9-2d9kl,Uid:bb5b8a50-1c6d-45d1-9c8e-750ff698926f,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.128301 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.134584 kubelet[3233]: W1112 20:55:15.128313 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.128327 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.129027 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.134584 kubelet[3233]: W1112 20:55:15.129038 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.129054 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.129689 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.134584 kubelet[3233]: W1112 20:55:15.129700 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.129715 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.134584 kubelet[3233]: E1112 20:55:15.131216 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.136809 kubelet[3233]: W1112 20:55:15.131239 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.131260 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.131631 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.136809 kubelet[3233]: W1112 20:55:15.131644 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.131789 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.132036 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.136809 kubelet[3233]: W1112 20:55:15.132047 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.132060 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.136809 kubelet[3233]: E1112 20:55:15.132373 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.136809 kubelet[3233]: W1112 20:55:15.132398 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.137229 kubelet[3233]: E1112 20:55:15.132412 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.137229 kubelet[3233]: I1112 20:55:15.133587 3233 topology_manager.go:215] "Topology Admit Handler" podUID="71853af8-7114-4e11-9b62-8d92def4793d" podNamespace="calico-system" podName="csi-node-driver-jt7lt" Nov 12 20:55:15.137229 kubelet[3233]: E1112 20:55:15.133942 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:15.137229 kubelet[3233]: E1112 20:55:15.136652 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.137229 kubelet[3233]: W1112 20:55:15.136676 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.137229 kubelet[3233]: E1112 20:55:15.136690 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.151994 kubelet[3233]: E1112 20:55:15.150017 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.151994 kubelet[3233]: W1112 20:55:15.150045 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.151994 kubelet[3233]: E1112 20:55:15.150069 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.192173 kubelet[3233]: E1112 20:55:15.191849 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.192173 kubelet[3233]: W1112 20:55:15.191899 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.192173 kubelet[3233]: E1112 20:55:15.191995 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.192616 kubelet[3233]: E1112 20:55:15.192426 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.192616 kubelet[3233]: W1112 20:55:15.192442 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.192616 kubelet[3233]: E1112 20:55:15.192460 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.192952 kubelet[3233]: E1112 20:55:15.192789 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.192952 kubelet[3233]: W1112 20:55:15.192802 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.192952 kubelet[3233]: E1112 20:55:15.192915 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.194375 kubelet[3233]: E1112 20:55:15.193711 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.194375 kubelet[3233]: W1112 20:55:15.193729 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.194375 kubelet[3233]: E1112 20:55:15.193746 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.194375 kubelet[3233]: E1112 20:55:15.194081 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.194375 kubelet[3233]: W1112 20:55:15.194093 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.194375 kubelet[3233]: E1112 20:55:15.194108 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.195260 kubelet[3233]: E1112 20:55:15.195162 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.195260 kubelet[3233]: W1112 20:55:15.195174 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.195260 kubelet[3233]: E1112 20:55:15.195190 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.195413 kubelet[3233]: E1112 20:55:15.195397 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.195413 kubelet[3233]: W1112 20:55:15.195409 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.195539 kubelet[3233]: E1112 20:55:15.195422 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.195669 kubelet[3233]: E1112 20:55:15.195618 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.195669 kubelet[3233]: W1112 20:55:15.195636 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.195669 kubelet[3233]: E1112 20:55:15.195649 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.196200 kubelet[3233]: E1112 20:55:15.196178 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.196200 kubelet[3233]: W1112 20:55:15.196198 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.196850 kubelet[3233]: E1112 20:55:15.196213 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.196850 kubelet[3233]: E1112 20:55:15.196654 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.196850 kubelet[3233]: W1112 20:55:15.196667 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.196850 kubelet[3233]: E1112 20:55:15.196789 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.197363 kubelet[3233]: E1112 20:55:15.197343 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.197363 kubelet[3233]: W1112 20:55:15.197363 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.197517 kubelet[3233]: E1112 20:55:15.197380 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.198227 kubelet[3233]: E1112 20:55:15.198115 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.198227 kubelet[3233]: W1112 20:55:15.198131 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.198227 kubelet[3233]: E1112 20:55:15.198146 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.198473 kubelet[3233]: E1112 20:55:15.198448 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.198473 kubelet[3233]: W1112 20:55:15.198467 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.198731 kubelet[3233]: E1112 20:55:15.198481 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.199364 kubelet[3233]: E1112 20:55:15.199312 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.199364 kubelet[3233]: W1112 20:55:15.199333 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.199364 kubelet[3233]: E1112 20:55:15.199348 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.200301 kubelet[3233]: E1112 20:55:15.199554 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.200301 kubelet[3233]: W1112 20:55:15.199565 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.200301 kubelet[3233]: E1112 20:55:15.199578 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.200301 kubelet[3233]: E1112 20:55:15.200108 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.200301 kubelet[3233]: W1112 20:55:15.200122 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.200301 kubelet[3233]: E1112 20:55:15.200137 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.201083 kubelet[3233]: E1112 20:55:15.201063 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.201083 kubelet[3233]: W1112 20:55:15.201083 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.201238 kubelet[3233]: E1112 20:55:15.201098 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.201592 kubelet[3233]: E1112 20:55:15.201340 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.201592 kubelet[3233]: W1112 20:55:15.201372 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.201592 kubelet[3233]: E1112 20:55:15.201387 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.202917 kubelet[3233]: E1112 20:55:15.202089 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.202917 kubelet[3233]: W1112 20:55:15.202110 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.202917 kubelet[3233]: E1112 20:55:15.202130 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.203678 kubelet[3233]: E1112 20:55:15.202374 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.203678 kubelet[3233]: W1112 20:55:15.203672 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.203797 kubelet[3233]: E1112 20:55:15.203688 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.210753 containerd[1692]: time="2024-11-12T20:55:15.210658503Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:15.210872 kubelet[3233]: E1112 20:55:15.210855 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.211035 kubelet[3233]: W1112 20:55:15.210874 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.211035 kubelet[3233]: E1112 20:55:15.210890 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.211035 kubelet[3233]: I1112 20:55:15.210932 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/71853af8-7114-4e11-9b62-8d92def4793d-varrun\") pod \"csi-node-driver-jt7lt\" (UID: \"71853af8-7114-4e11-9b62-8d92def4793d\") " pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:15.212086 kubelet[3233]: E1112 20:55:15.212054 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.212086 kubelet[3233]: W1112 20:55:15.212075 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.212249 kubelet[3233]: E1112 20:55:15.212102 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.212249 kubelet[3233]: I1112 20:55:15.212130 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/71853af8-7114-4e11-9b62-8d92def4793d-socket-dir\") pod \"csi-node-driver-jt7lt\" (UID: \"71853af8-7114-4e11-9b62-8d92def4793d\") " pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:15.212654 containerd[1692]: time="2024-11-12T20:55:15.212497127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:15.213051 containerd[1692]: time="2024-11-12T20:55:15.212914933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:15.213209 kubelet[3233]: E1112 20:55:15.213045 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.213209 kubelet[3233]: W1112 20:55:15.213075 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.213209 kubelet[3233]: E1112 20:55:15.213108 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.213209 kubelet[3233]: I1112 20:55:15.213137 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcgc7\" (UniqueName: \"kubernetes.io/projected/71853af8-7114-4e11-9b62-8d92def4793d-kube-api-access-wcgc7\") pod \"csi-node-driver-jt7lt\" (UID: \"71853af8-7114-4e11-9b62-8d92def4793d\") " pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:15.214412 containerd[1692]: time="2024-11-12T20:55:15.213866545Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:15.214494 kubelet[3233]: E1112 20:55:15.214151 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.214494 kubelet[3233]: W1112 20:55:15.214164 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.214494 kubelet[3233]: E1112 20:55:15.214195 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.215402 kubelet[3233]: E1112 20:55:15.214509 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.215402 kubelet[3233]: W1112 20:55:15.214523 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.215402 kubelet[3233]: E1112 20:55:15.214552 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.215778 kubelet[3233]: E1112 20:55:15.215634 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.215778 kubelet[3233]: W1112 20:55:15.215650 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.215778 kubelet[3233]: E1112 20:55:15.215694 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.215778 kubelet[3233]: I1112 20:55:15.215728 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/71853af8-7114-4e11-9b62-8d92def4793d-registration-dir\") pod \"csi-node-driver-jt7lt\" (UID: \"71853af8-7114-4e11-9b62-8d92def4793d\") " pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:15.216510 kubelet[3233]: E1112 20:55:15.215998 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.216510 kubelet[3233]: W1112 20:55:15.216010 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.216791 kubelet[3233]: E1112 20:55:15.216616 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.217336 kubelet[3233]: E1112 20:55:15.217202 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.217336 kubelet[3233]: W1112 20:55:15.217217 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.217336 kubelet[3233]: E1112 20:55:15.217231 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.219087 kubelet[3233]: E1112 20:55:15.219065 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.219087 kubelet[3233]: W1112 20:55:15.219086 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.219303 kubelet[3233]: E1112 20:55:15.219108 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.221031 kubelet[3233]: E1112 20:55:15.220951 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.221031 kubelet[3233]: W1112 20:55:15.221000 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.221031 kubelet[3233]: E1112 20:55:15.221017 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.221888 kubelet[3233]: E1112 20:55:15.221867 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.221888 kubelet[3233]: W1112 20:55:15.221886 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.222072 kubelet[3233]: E1112 20:55:15.221901 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.224034 kubelet[3233]: E1112 20:55:15.224009 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.224034 kubelet[3233]: W1112 20:55:15.224031 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.224656 kubelet[3233]: E1112 20:55:15.224046 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.224656 kubelet[3233]: E1112 20:55:15.224315 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.224656 kubelet[3233]: W1112 20:55:15.224326 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.224656 kubelet[3233]: E1112 20:55:15.224341 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.224656 kubelet[3233]: I1112 20:55:15.224376 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/71853af8-7114-4e11-9b62-8d92def4793d-kubelet-dir\") pod \"csi-node-driver-jt7lt\" (UID: \"71853af8-7114-4e11-9b62-8d92def4793d\") " pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:15.225149 kubelet[3233]: E1112 20:55:15.225000 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.225149 kubelet[3233]: W1112 20:55:15.225020 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.225149 kubelet[3233]: E1112 20:55:15.225040 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.225630 kubelet[3233]: E1112 20:55:15.225462 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.225630 kubelet[3233]: W1112 20:55:15.225477 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.225630 kubelet[3233]: E1112 20:55:15.225596 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.257182 systemd[1]: Started cri-containerd-1df834dd4a812b6f41bddaa149809657f36192ab2e90fd780785a000552b820b.scope - libcontainer container 1df834dd4a812b6f41bddaa149809657f36192ab2e90fd780785a000552b820b. Nov 12 20:55:15.284695 containerd[1692]: time="2024-11-12T20:55:15.284583576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnnmf,Uid:49b25c4b-196f-4ddd-883c-b72a664742f3,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:15.319101 containerd[1692]: time="2024-11-12T20:55:15.318433522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-6cfc7c4dd9-2d9kl,Uid:bb5b8a50-1c6d-45d1-9c8e-750ff698926f,Namespace:calico-system,Attempt:0,} returns sandbox id \"1df834dd4a812b6f41bddaa149809657f36192ab2e90fd780785a000552b820b\"" Nov 12 20:55:15.321897 containerd[1692]: time="2024-11-12T20:55:15.321850167Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\"" Nov 12 20:55:15.325918 kubelet[3233]: E1112 20:55:15.325804 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.326899 kubelet[3233]: W1112 20:55:15.326292 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.326899 kubelet[3233]: E1112 20:55:15.326348 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.328258 kubelet[3233]: E1112 20:55:15.327937 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.328258 kubelet[3233]: W1112 20:55:15.327955 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.328258 kubelet[3233]: E1112 20:55:15.327992 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.329343 kubelet[3233]: E1112 20:55:15.329308 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.329343 kubelet[3233]: W1112 20:55:15.329324 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.330358 kubelet[3233]: E1112 20:55:15.329687 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.332280 kubelet[3233]: E1112 20:55:15.331929 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.332280 kubelet[3233]: W1112 20:55:15.331953 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.332280 kubelet[3233]: E1112 20:55:15.332006 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.333138 kubelet[3233]: E1112 20:55:15.333078 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.334109 kubelet[3233]: W1112 20:55:15.333365 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.334109 kubelet[3233]: E1112 20:55:15.333526 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.335337 kubelet[3233]: E1112 20:55:15.335025 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.335337 kubelet[3233]: W1112 20:55:15.335044 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.336426 kubelet[3233]: E1112 20:55:15.336008 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.336426 kubelet[3233]: E1112 20:55:15.336365 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.336426 kubelet[3233]: W1112 20:55:15.336378 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.337094 kubelet[3233]: E1112 20:55:15.336696 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.338226 kubelet[3233]: E1112 20:55:15.337772 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.338226 kubelet[3233]: W1112 20:55:15.337788 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.339313 kubelet[3233]: E1112 20:55:15.339055 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.340008 kubelet[3233]: E1112 20:55:15.339852 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.340008 kubelet[3233]: W1112 20:55:15.339867 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.341178 kubelet[3233]: E1112 20:55:15.340524 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.341178 kubelet[3233]: E1112 20:55:15.341129 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.341178 kubelet[3233]: W1112 20:55:15.341142 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.341348 kubelet[3233]: E1112 20:55:15.341179 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.342098 kubelet[3233]: E1112 20:55:15.341406 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.342098 kubelet[3233]: W1112 20:55:15.341418 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.342098 kubelet[3233]: E1112 20:55:15.341651 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.342098 kubelet[3233]: W1112 20:55:15.341667 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.342098 kubelet[3233]: E1112 20:55:15.341754 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.342098 kubelet[3233]: E1112 20:55:15.341774 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.342098 kubelet[3233]: E1112 20:55:15.341936 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.342098 kubelet[3233]: W1112 20:55:15.341948 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.342432 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.344540 kubelet[3233]: W1112 20:55:15.342444 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.342656 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.342685 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.342931 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.344540 kubelet[3233]: W1112 20:55:15.343044 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.343068 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.343895 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.344540 kubelet[3233]: W1112 20:55:15.343909 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.344540 kubelet[3233]: E1112 20:55:15.344135 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.344992 kubelet[3233]: E1112 20:55:15.344734 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.344992 kubelet[3233]: W1112 20:55:15.344746 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.345162 kubelet[3233]: E1112 20:55:15.345088 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.345821 kubelet[3233]: E1112 20:55:15.345536 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.345821 kubelet[3233]: W1112 20:55:15.345560 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.346209 kubelet[3233]: E1112 20:55:15.346002 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.346818 kubelet[3233]: E1112 20:55:15.346561 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.346818 kubelet[3233]: W1112 20:55:15.346579 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.347338 kubelet[3233]: E1112 20:55:15.347108 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.347871 kubelet[3233]: E1112 20:55:15.347549 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.347871 kubelet[3233]: W1112 20:55:15.347564 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.348353 kubelet[3233]: E1112 20:55:15.348211 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.348738 kubelet[3233]: E1112 20:55:15.348592 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.348738 kubelet[3233]: W1112 20:55:15.348614 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.349326 kubelet[3233]: E1112 20:55:15.349099 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.350072 kubelet[3233]: E1112 20:55:15.349902 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.350072 kubelet[3233]: W1112 20:55:15.349920 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.350072 kubelet[3233]: E1112 20:55:15.350011 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.351256 kubelet[3233]: E1112 20:55:15.350836 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.351256 kubelet[3233]: W1112 20:55:15.350851 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.351256 kubelet[3233]: E1112 20:55:15.350908 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.351786 kubelet[3233]: E1112 20:55:15.351670 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.351786 kubelet[3233]: W1112 20:55:15.351684 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.352140 kubelet[3233]: E1112 20:55:15.352015 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:15.352140 kubelet[3233]: E1112 20:55:15.352095 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.352140 kubelet[3233]: W1112 20:55:15.352104 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.352140 kubelet[3233]: E1112 20:55:15.352116 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.352772 containerd[1692]: time="2024-11-12T20:55:15.352563372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:15.352772 containerd[1692]: time="2024-11-12T20:55:15.352657673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:15.352772 containerd[1692]: time="2024-11-12T20:55:15.352690173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:15.354545 containerd[1692]: time="2024-11-12T20:55:15.353829088Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:15.367920 kubelet[3233]: E1112 20:55:15.367895 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:15.368459 kubelet[3233]: W1112 20:55:15.368431 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:15.368624 kubelet[3233]: E1112 20:55:15.368582 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:15.383558 systemd[1]: Started cri-containerd-84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c.scope - libcontainer container 84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c. Nov 12 20:55:15.421105 containerd[1692]: time="2024-11-12T20:55:15.420751169Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fnnmf,Uid:49b25c4b-196f-4ddd-883c-b72a664742f3,Namespace:calico-system,Attempt:0,} returns sandbox id \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\"" Nov 12 20:55:16.763989 kubelet[3233]: E1112 20:55:16.762712 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:17.881499 containerd[1692]: time="2024-11-12T20:55:17.881439869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:17.886912 containerd[1692]: time="2024-11-12T20:55:17.886833540Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/typha:v3.29.0: active requests=0, bytes read=29849168" Nov 12 20:55:17.893120 containerd[1692]: time="2024-11-12T20:55:17.892579016Z" level=info msg="ImageCreate event name:\"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:17.899153 containerd[1692]: time="2024-11-12T20:55:17.899063401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:17.899840 containerd[1692]: time="2024-11-12T20:55:17.899674310Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.0\" with image id \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:850e5f751e100580bffb57d1b70d4e90d90ecaab5ef1b6dc6a43dcd34a5e1057\", size \"31342252\" in 2.57767964s" Nov 12 20:55:17.899840 containerd[1692]: time="2024-11-12T20:55:17.899713910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.0\" returns image reference \"sha256:eb8a933b39daca50b75ccf193cc6193e39512bc996c16898d43d4c1f39c8603b\"" Nov 12 20:55:17.903147 containerd[1692]: time="2024-11-12T20:55:17.902855951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\"" Nov 12 20:55:17.916247 containerd[1692]: time="2024-11-12T20:55:17.915881923Z" level=info msg="CreateContainer within sandbox \"1df834dd4a812b6f41bddaa149809657f36192ab2e90fd780785a000552b820b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Nov 12 20:55:17.957393 containerd[1692]: time="2024-11-12T20:55:17.957343269Z" level=info msg="CreateContainer within sandbox \"1df834dd4a812b6f41bddaa149809657f36192ab2e90fd780785a000552b820b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id 
\"9384eea55d2f5a35b33c04c779c057dd74af5d9c4f28799bc8b1dc795cc54c68\"" Nov 12 20:55:17.958078 containerd[1692]: time="2024-11-12T20:55:17.958003578Z" level=info msg="StartContainer for \"9384eea55d2f5a35b33c04c779c057dd74af5d9c4f28799bc8b1dc795cc54c68\"" Nov 12 20:55:17.993181 systemd[1]: Started cri-containerd-9384eea55d2f5a35b33c04c779c057dd74af5d9c4f28799bc8b1dc795cc54c68.scope - libcontainer container 9384eea55d2f5a35b33c04c779c057dd74af5d9c4f28799bc8b1dc795cc54c68. Nov 12 20:55:18.048067 containerd[1692]: time="2024-11-12T20:55:18.047880761Z" level=info msg="StartContainer for \"9384eea55d2f5a35b33c04c779c057dd74af5d9c4f28799bc8b1dc795cc54c68\" returns successfully" Nov 12 20:55:18.764460 kubelet[3233]: E1112 20:55:18.763892 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:18.931391 kubelet[3233]: E1112 20:55:18.931100 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.931391 kubelet[3233]: W1112 20:55:18.931140 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.931391 kubelet[3233]: E1112 20:55:18.931172 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.931979 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.932680 kubelet[3233]: W1112 20:55:18.932000 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.932022 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.932286 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.932680 kubelet[3233]: W1112 20:55:18.932299 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.932315 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.932615 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.932680 kubelet[3233]: W1112 20:55:18.932648 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.932680 kubelet[3233]: E1112 20:55:18.932665 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.933539 kubelet[3233]: E1112 20:55:18.933521 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.933669 kubelet[3233]: W1112 20:55:18.933649 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.933794 kubelet[3233]: E1112 20:55:18.933778 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.934290 kubelet[3233]: E1112 20:55:18.934271 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.934408 kubelet[3233]: W1112 20:55:18.934382 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.934408 kubelet[3233]: E1112 20:55:18.934404 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.934692 kubelet[3233]: E1112 20:55:18.934672 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.934692 kubelet[3233]: W1112 20:55:18.934687 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.934912 kubelet[3233]: E1112 20:55:18.934705 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.935021 kubelet[3233]: E1112 20:55:18.934944 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.935021 kubelet[3233]: W1112 20:55:18.934979 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.935021 kubelet[3233]: E1112 20:55:18.935007 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.935454 kubelet[3233]: E1112 20:55:18.935423 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.935454 kubelet[3233]: W1112 20:55:18.935441 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.935617 kubelet[3233]: E1112 20:55:18.935460 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.935750 kubelet[3233]: E1112 20:55:18.935709 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.935750 kubelet[3233]: W1112 20:55:18.935722 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.935750 kubelet[3233]: E1112 20:55:18.935737 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.936048 kubelet[3233]: E1112 20:55:18.936034 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.936048 kubelet[3233]: W1112 20:55:18.936048 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.936208 kubelet[3233]: E1112 20:55:18.936064 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.936326 kubelet[3233]: E1112 20:55:18.936303 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.936326 kubelet[3233]: W1112 20:55:18.936319 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.936326 kubelet[3233]: E1112 20:55:18.936334 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.936615 kubelet[3233]: E1112 20:55:18.936562 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.936615 kubelet[3233]: W1112 20:55:18.936574 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.936615 kubelet[3233]: E1112 20:55:18.936588 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.936902 kubelet[3233]: E1112 20:55:18.936814 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.936902 kubelet[3233]: W1112 20:55:18.936826 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.936902 kubelet[3233]: E1112 20:55:18.936840 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.937116 kubelet[3233]: E1112 20:55:18.937058 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.937116 kubelet[3233]: W1112 20:55:18.937067 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.937116 kubelet[3233]: E1112 20:55:18.937080 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.968761 kubelet[3233]: E1112 20:55:18.968715 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.968761 kubelet[3233]: W1112 20:55:18.968742 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.968761 kubelet[3233]: E1112 20:55:18.968771 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.969273 kubelet[3233]: E1112 20:55:18.969142 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.969273 kubelet[3233]: W1112 20:55:18.969162 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.969273 kubelet[3233]: E1112 20:55:18.969188 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.969592 kubelet[3233]: E1112 20:55:18.969510 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.969592 kubelet[3233]: W1112 20:55:18.969524 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.969592 kubelet[3233]: E1112 20:55:18.969556 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.969916 kubelet[3233]: E1112 20:55:18.969893 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.969916 kubelet[3233]: W1112 20:55:18.969911 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.970101 kubelet[3233]: E1112 20:55:18.969944 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.970280 kubelet[3233]: E1112 20:55:18.970257 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.970280 kubelet[3233]: W1112 20:55:18.970275 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.970406 kubelet[3233]: E1112 20:55:18.970302 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.970634 kubelet[3233]: E1112 20:55:18.970613 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.970634 kubelet[3233]: W1112 20:55:18.970629 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.970831 kubelet[3233]: E1112 20:55:18.970769 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.970950 kubelet[3233]: E1112 20:55:18.970932 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.970950 kubelet[3233]: W1112 20:55:18.970946 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.971106 kubelet[3233]: E1112 20:55:18.971088 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.971279 kubelet[3233]: E1112 20:55:18.971261 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.971279 kubelet[3233]: W1112 20:55:18.971277 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.971430 kubelet[3233]: E1112 20:55:18.971297 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.971579 kubelet[3233]: E1112 20:55:18.971559 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.971579 kubelet[3233]: W1112 20:55:18.971575 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.971703 kubelet[3233]: E1112 20:55:18.971607 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.971907 kubelet[3233]: E1112 20:55:18.971887 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.971907 kubelet[3233]: W1112 20:55:18.971902 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.972078 kubelet[3233]: E1112 20:55:18.971925 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.972336 kubelet[3233]: E1112 20:55:18.972312 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.972336 kubelet[3233]: W1112 20:55:18.972328 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.972474 kubelet[3233]: E1112 20:55:18.972345 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.972931 kubelet[3233]: E1112 20:55:18.972910 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.972931 kubelet[3233]: W1112 20:55:18.972928 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.973107 kubelet[3233]: E1112 20:55:18.973072 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.973408 kubelet[3233]: E1112 20:55:18.973387 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.973408 kubelet[3233]: W1112 20:55:18.973404 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.973533 kubelet[3233]: E1112 20:55:18.973428 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.973694 kubelet[3233]: E1112 20:55:18.973674 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.973694 kubelet[3233]: W1112 20:55:18.973689 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.973845 kubelet[3233]: E1112 20:55:18.973712 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.974052 kubelet[3233]: E1112 20:55:18.974031 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.974052 kubelet[3233]: W1112 20:55:18.974047 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.974186 kubelet[3233]: E1112 20:55:18.974082 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.974391 kubelet[3233]: E1112 20:55:18.974370 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.974391 kubelet[3233]: W1112 20:55:18.974386 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.974533 kubelet[3233]: E1112 20:55:18.974408 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:18.974874 kubelet[3233]: E1112 20:55:18.974854 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.974874 kubelet[3233]: W1112 20:55:18.974871 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.975046 kubelet[3233]: E1112 20:55:18.975013 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Nov 12 20:55:18.975253 kubelet[3233]: E1112 20:55:18.975233 3233 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Nov 12 20:55:18.975253 kubelet[3233]: W1112 20:55:18.975248 3233 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Nov 12 20:55:18.975356 kubelet[3233]: E1112 20:55:18.975265 3233 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Nov 12 20:55:19.641201 containerd[1692]: time="2024-11-12T20:55:19.641151426Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:19.643706 containerd[1692]: time="2024-11-12T20:55:19.643523056Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0: active requests=0, bytes read=5362116" Nov 12 20:55:19.646992 containerd[1692]: time="2024-11-12T20:55:19.646801199Z" level=info msg="ImageCreate event name:\"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:19.650672 containerd[1692]: time="2024-11-12T20:55:19.650620947Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:19.651483 containerd[1692]: time="2024-11-12T20:55:19.651254356Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" with image id \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:bed11f00e388b9bbf6eb3be410d4bc86d7020f790902b87f9e330df5a2058769\", size \"6855168\" in 1.748355703s" Nov 12 20:55:19.651483 containerd[1692]: time="2024-11-12T20:55:19.651303556Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.0\" returns image reference \"sha256:3fbafc0cb73520aede9a07469f27fd8798e681807d14465761f19c8c2bda1cec\"" Nov 12 20:55:19.654081 containerd[1692]: time="2024-11-12T20:55:19.654050691Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Nov 12 20:55:19.703162 containerd[1692]: time="2024-11-12T20:55:19.703111320Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b\"" Nov 12 20:55:19.703810 containerd[1692]: time="2024-11-12T20:55:19.703715028Z" level=info msg="StartContainer for \"968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b\"" Nov 12 20:55:19.740131 systemd[1]: Started cri-containerd-968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b.scope - libcontainer container 968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b. Nov 12 20:55:19.770629 containerd[1692]: time="2024-11-12T20:55:19.770576185Z" level=info msg="StartContainer for \"968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b\" returns successfully" Nov 12 20:55:19.783446 systemd[1]: cri-containerd-968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b.scope: Deactivated successfully. Nov 12 20:55:19.807632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b-rootfs.mount: Deactivated successfully. 
Nov 12 20:55:19.874387 kubelet[3233]: I1112 20:55:19.873344 3233 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:19.899782 kubelet[3233]: I1112 20:55:19.891241 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-6cfc7c4dd9-2d9kl" podStartSLOduration=3.310644452 podStartE2EDuration="5.891215531s" podCreationTimestamp="2024-11-12 20:55:14 +0000 UTC" firstStartedPulling="2024-11-12 20:55:15.320190545 +0000 UTC m=+22.655811942" lastFinishedPulling="2024-11-12 20:55:17.900761524 +0000 UTC m=+25.236383021" observedRunningTime="2024-11-12 20:55:18.885817194 +0000 UTC m=+26.221438591" watchObservedRunningTime="2024-11-12 20:55:19.891215531 +0000 UTC m=+27.226836928" Nov 12 20:55:20.764439 kubelet[3233]: E1112 20:55:20.763184 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:21.068474 containerd[1692]: time="2024-11-12T20:55:21.068388219Z" level=info msg="shim disconnected" id=968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b namespace=k8s.io Nov 12 20:55:21.068474 containerd[1692]: time="2024-11-12T20:55:21.068467120Z" level=warning msg="cleaning up after shim disconnected" id=968650a8ffa3af9ba2028f1eb3efbd5020f3f671ac1b25841e651d290db41a4b namespace=k8s.io Nov 12 20:55:21.068474 containerd[1692]: time="2024-11-12T20:55:21.068478620Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:55:21.881532 containerd[1692]: time="2024-11-12T20:55:21.881477040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\"" Nov 12 20:55:22.764699 kubelet[3233]: E1112 20:55:22.763337 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:24.763946 kubelet[3233]: E1112 20:55:24.763516 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:26.764291 kubelet[3233]: E1112 20:55:26.764239 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:27.168395 containerd[1692]: time="2024-11-12T20:55:27.168340653Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:27.170318 containerd[1692]: time="2024-11-12T20:55:27.170251574Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.0: active requests=0, bytes read=96163683" Nov 12 20:55:27.173620 containerd[1692]: time="2024-11-12T20:55:27.173550310Z" level=info msg="ImageCreate event name:\"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:27.177778 containerd[1692]: time="2024-11-12T20:55:27.177747755Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:27.178979 containerd[1692]: 
time="2024-11-12T20:55:27.178403963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.0\" with image id \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:a7c1b02375aa96ae882655397cd9dd0dcc867d9587ce7b866cf9cd65fd7ca1dd\", size \"97656775\" in 5.296869621s" Nov 12 20:55:27.178979 containerd[1692]: time="2024-11-12T20:55:27.178448363Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.0\" returns image reference \"sha256:124793defc2ae544a3e0dcd1a225bff5166dbebc1bdacb41c4161b9c0c53425c\"" Nov 12 20:55:27.181069 containerd[1692]: time="2024-11-12T20:55:27.180738688Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Nov 12 20:55:27.221571 containerd[1692]: time="2024-11-12T20:55:27.221524732Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7\"" Nov 12 20:55:27.223562 containerd[1692]: time="2024-11-12T20:55:27.222116338Z" level=info msg="StartContainer for \"c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7\"" Nov 12 20:55:27.258295 systemd[1]: Started cri-containerd-c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7.scope - libcontainer container c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7. 
Nov 12 20:55:27.294594 containerd[1692]: time="2024-11-12T20:55:27.294441725Z" level=info msg="StartContainer for \"c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7\" returns successfully" Nov 12 20:55:28.715161 containerd[1692]: time="2024-11-12T20:55:28.715102884Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Nov 12 20:55:28.717217 systemd[1]: cri-containerd-c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7.scope: Deactivated successfully. Nov 12 20:55:28.739499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7-rootfs.mount: Deactivated successfully. Nov 12 20:55:28.763204 kubelet[3233]: E1112 20:55:28.763139 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.814433 3233 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.844791 3233 topology_manager.go:215] "Topology Admit Handler" podUID="cc7e1b18-e518-4260-a198-f799965c328d" podNamespace="kube-system" podName="coredns-7db6d8ff4d-9hddw" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.853308 3233 topology_manager.go:215] "Topology Admit Handler" podUID="9b2c03e2-3981-44eb-8347-9662269e0c09" podNamespace="kube-system" podName="coredns-7db6d8ff4d-km9bd" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.855516 3233 topology_manager.go:215] "Topology Admit Handler" 
podUID="af49ceee-a159-42af-9ac7-30f86ab527d2" podNamespace="calico-system" podName="calico-kube-controllers-5bf77db8d-lgc2s" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.860532 3233 topology_manager.go:215] "Topology Admit Handler" podUID="71fd5972-91e2-40bf-bb76-afa53f7f5d20" podNamespace="calico-apiserver" podName="calico-apiserver-777ccbcc85-jc8rd" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.860760 3233 topology_manager.go:215] "Topology Admit Handler" podUID="be3507c8-8c76-43ce-abb3-9bd37cc91cb6" podNamespace="calico-apiserver" podName="calico-apiserver-777ccbcc85-8d9fq" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.941162 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrt8t\" (UniqueName: \"kubernetes.io/projected/cc7e1b18-e518-4260-a198-f799965c328d-kube-api-access-jrt8t\") pod \"coredns-7db6d8ff4d-9hddw\" (UID: \"cc7e1b18-e518-4260-a198-f799965c328d\") " pod="kube-system/coredns-7db6d8ff4d-9hddw" Nov 12 20:55:29.246536 kubelet[3233]: I1112 20:55:28.941199 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwl2w\" (UniqueName: \"kubernetes.io/projected/9b2c03e2-3981-44eb-8347-9662269e0c09-kube-api-access-xwl2w\") pod \"coredns-7db6d8ff4d-km9bd\" (UID: \"9b2c03e2-3981-44eb-8347-9662269e0c09\") " pod="kube-system/coredns-7db6d8ff4d-km9bd" Nov 12 20:55:28.858008 systemd[1]: Created slice kubepods-burstable-podcc7e1b18_e518_4260_a198_f799965c328d.slice - libcontainer container kubepods-burstable-podcc7e1b18_e518_4260_a198_f799965c328d.slice. 
Nov 12 20:55:29.247122 kubelet[3233]: I1112 20:55:28.941230 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/af49ceee-a159-42af-9ac7-30f86ab527d2-tigera-ca-bundle\") pod \"calico-kube-controllers-5bf77db8d-lgc2s\" (UID: \"af49ceee-a159-42af-9ac7-30f86ab527d2\") " pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" Nov 12 20:55:29.247122 kubelet[3233]: I1112 20:55:28.941273 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/71fd5972-91e2-40bf-bb76-afa53f7f5d20-calico-apiserver-certs\") pod \"calico-apiserver-777ccbcc85-jc8rd\" (UID: \"71fd5972-91e2-40bf-bb76-afa53f7f5d20\") " pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" Nov 12 20:55:29.247122 kubelet[3233]: I1112 20:55:28.941305 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cc7e1b18-e518-4260-a198-f799965c328d-config-volume\") pod \"coredns-7db6d8ff4d-9hddw\" (UID: \"cc7e1b18-e518-4260-a198-f799965c328d\") " pod="kube-system/coredns-7db6d8ff4d-9hddw" Nov 12 20:55:29.247122 kubelet[3233]: I1112 20:55:28.941320 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qhfz\" (UniqueName: \"kubernetes.io/projected/71fd5972-91e2-40bf-bb76-afa53f7f5d20-kube-api-access-2qhfz\") pod \"calico-apiserver-777ccbcc85-jc8rd\" (UID: \"71fd5972-91e2-40bf-bb76-afa53f7f5d20\") " pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" Nov 12 20:55:29.247122 kubelet[3233]: I1112 20:55:28.941336 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jgkg\" (UniqueName: \"kubernetes.io/projected/af49ceee-a159-42af-9ac7-30f86ab527d2-kube-api-access-5jgkg\") pod 
\"calico-kube-controllers-5bf77db8d-lgc2s\" (UID: \"af49ceee-a159-42af-9ac7-30f86ab527d2\") " pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" Nov 12 20:55:28.869419 systemd[1]: Created slice kubepods-burstable-pod9b2c03e2_3981_44eb_8347_9662269e0c09.slice - libcontainer container kubepods-burstable-pod9b2c03e2_3981_44eb_8347_9662269e0c09.slice. Nov 12 20:55:29.247408 kubelet[3233]: I1112 20:55:28.941352 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9b2c03e2-3981-44eb-8347-9662269e0c09-config-volume\") pod \"coredns-7db6d8ff4d-km9bd\" (UID: \"9b2c03e2-3981-44eb-8347-9662269e0c09\") " pod="kube-system/coredns-7db6d8ff4d-km9bd" Nov 12 20:55:29.247408 kubelet[3233]: I1112 20:55:28.941373 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/be3507c8-8c76-43ce-abb3-9bd37cc91cb6-calico-apiserver-certs\") pod \"calico-apiserver-777ccbcc85-8d9fq\" (UID: \"be3507c8-8c76-43ce-abb3-9bd37cc91cb6\") " pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" Nov 12 20:55:29.247408 kubelet[3233]: I1112 20:55:28.941388 3233 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fg5cq\" (UniqueName: \"kubernetes.io/projected/be3507c8-8c76-43ce-abb3-9bd37cc91cb6-kube-api-access-fg5cq\") pod \"calico-apiserver-777ccbcc85-8d9fq\" (UID: \"be3507c8-8c76-43ce-abb3-9bd37cc91cb6\") " pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" Nov 12 20:55:28.882750 systemd[1]: Created slice kubepods-besteffort-podaf49ceee_a159_42af_9ac7_30f86ab527d2.slice - libcontainer container kubepods-besteffort-podaf49ceee_a159_42af_9ac7_30f86ab527d2.slice. 
Nov 12 20:55:28.891014 systemd[1]: Created slice kubepods-besteffort-pod71fd5972_91e2_40bf_bb76_afa53f7f5d20.slice - libcontainer container kubepods-besteffort-pod71fd5972_91e2_40bf_bb76_afa53f7f5d20.slice. Nov 12 20:55:28.898290 systemd[1]: Created slice kubepods-besteffort-podbe3507c8_8c76_43ce_abb3_9bd37cc91cb6.slice - libcontainer container kubepods-besteffort-podbe3507c8_8c76_43ce_abb3_9bd37cc91cb6.slice. Nov 12 20:55:29.551029 containerd[1692]: time="2024-11-12T20:55:29.550887479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hddw,Uid:cc7e1b18-e518-4260-a198-f799965c328d,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:29.552473 containerd[1692]: time="2024-11-12T20:55:29.552433795Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-8d9fq,Uid:be3507c8-8c76-43ce-abb3-9bd37cc91cb6,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:55:29.569241 containerd[1692]: time="2024-11-12T20:55:29.569201078Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km9bd,Uid:9b2c03e2-3981-44eb-8347-9662269e0c09,Namespace:kube-system,Attempt:0,}" Nov 12 20:55:29.576930 containerd[1692]: time="2024-11-12T20:55:29.576832961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-jc8rd,Uid:71fd5972-91e2-40bf-bb76-afa53f7f5d20,Namespace:calico-apiserver,Attempt:0,}" Nov 12 20:55:29.577174 containerd[1692]: time="2024-11-12T20:55:29.576835361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf77db8d-lgc2s,Uid:af49ceee-a159-42af-9ac7-30f86ab527d2,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:30.368150 containerd[1692]: time="2024-11-12T20:55:30.368086671Z" level=info msg="shim disconnected" id=c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7 namespace=k8s.io Nov 12 20:55:30.368150 containerd[1692]: time="2024-11-12T20:55:30.368137071Z" level=warning msg="cleaning up after shim disconnected" 
id=c8321e8d91b313298e0999ea5067f320c6d3016d9d04d4b3df6df3d9f6942cb7 namespace=k8s.io Nov 12 20:55:30.368150 containerd[1692]: time="2024-11-12T20:55:30.368146872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Nov 12 20:55:30.635980 containerd[1692]: time="2024-11-12T20:55:30.635048776Z" level=error msg="Failed to destroy network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.635980 containerd[1692]: time="2024-11-12T20:55:30.635448280Z" level=error msg="encountered an error cleaning up failed sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.635980 containerd[1692]: time="2024-11-12T20:55:30.635517081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-jc8rd,Uid:71fd5972-91e2-40bf-bb76-afa53f7f5d20,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.636254 kubelet[3233]: E1112 20:55:30.635773 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.636254 kubelet[3233]: E1112 20:55:30.636045 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" Nov 12 20:55:30.636254 kubelet[3233]: E1112 20:55:30.636106 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" Nov 12 20:55:30.636696 kubelet[3233]: E1112 20:55:30.636201 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-777ccbcc85-jc8rd_calico-apiserver(71fd5972-91e2-40bf-bb76-afa53f7f5d20)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-777ccbcc85-jc8rd_calico-apiserver(71fd5972-91e2-40bf-bb76-afa53f7f5d20)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" podUID="71fd5972-91e2-40bf-bb76-afa53f7f5d20" Nov 12 20:55:30.687370 containerd[1692]: time="2024-11-12T20:55:30.686103631Z" 
level=error msg="Failed to destroy network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.687370 containerd[1692]: time="2024-11-12T20:55:30.687070842Z" level=error msg="encountered an error cleaning up failed sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.687370 containerd[1692]: time="2024-11-12T20:55:30.687142643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hddw,Uid:cc7e1b18-e518-4260-a198-f799965c328d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.687615 kubelet[3233]: E1112 20:55:30.687395 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.687615 kubelet[3233]: E1112 20:55:30.687568 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hddw" Nov 12 20:55:30.688099 kubelet[3233]: E1112 20:55:30.687612 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-9hddw" Nov 12 20:55:30.688099 kubelet[3233]: E1112 20:55:30.687815 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-9hddw_kube-system(cc7e1b18-e518-4260-a198-f799965c328d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-9hddw_kube-system(cc7e1b18-e518-4260-a198-f799965c328d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hddw" podUID="cc7e1b18-e518-4260-a198-f799965c328d" Nov 12 20:55:30.721108 containerd[1692]: time="2024-11-12T20:55:30.720946310Z" level=error msg="Failed to destroy network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.721663 
containerd[1692]: time="2024-11-12T20:55:30.721523617Z" level=error msg="encountered an error cleaning up failed sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.721663 containerd[1692]: time="2024-11-12T20:55:30.721606318Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-8d9fq,Uid:be3507c8-8c76-43ce-abb3-9bd37cc91cb6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.722679 kubelet[3233]: E1112 20:55:30.722205 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.722679 kubelet[3233]: E1112 20:55:30.722284 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" Nov 12 20:55:30.722679 kubelet[3233]: E1112 20:55:30.722311 
3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" Nov 12 20:55:30.722881 kubelet[3233]: E1112 20:55:30.722381 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-777ccbcc85-8d9fq_calico-apiserver(be3507c8-8c76-43ce-abb3-9bd37cc91cb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-777ccbcc85-8d9fq_calico-apiserver(be3507c8-8c76-43ce-abb3-9bd37cc91cb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" podUID="be3507c8-8c76-43ce-abb3-9bd37cc91cb6" Nov 12 20:55:30.746229 containerd[1692]: time="2024-11-12T20:55:30.745982783Z" level=error msg="Failed to destroy network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.746890 containerd[1692]: time="2024-11-12T20:55:30.746784992Z" level=error msg="encountered an error cleaning up failed sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.746890 containerd[1692]: time="2024-11-12T20:55:30.746861992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf77db8d-lgc2s,Uid:af49ceee-a159-42af-9ac7-30f86ab527d2,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.748936 kubelet[3233]: E1112 20:55:30.748153 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.748936 kubelet[3233]: E1112 20:55:30.748500 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" Nov 12 20:55:30.748936 kubelet[3233]: E1112 20:55:30.748530 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" Nov 12 20:55:30.749163 kubelet[3233]: E1112 20:55:30.748603 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5bf77db8d-lgc2s_calico-system(af49ceee-a159-42af-9ac7-30f86ab527d2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5bf77db8d-lgc2s_calico-system(af49ceee-a159-42af-9ac7-30f86ab527d2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" podUID="af49ceee-a159-42af-9ac7-30f86ab527d2" Nov 12 20:55:30.751344 containerd[1692]: time="2024-11-12T20:55:30.751295841Z" level=error msg="Failed to destroy network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.751658 containerd[1692]: time="2024-11-12T20:55:30.751623344Z" level=error msg="encountered an error cleaning up failed sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.751759 containerd[1692]: time="2024-11-12T20:55:30.751688445Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7db6d8ff4d-km9bd,Uid:9b2c03e2-3981-44eb-8347-9662269e0c09,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.751984 kubelet[3233]: E1112 20:55:30.751929 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.752083 kubelet[3233]: E1112 20:55:30.752007 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km9bd" Nov 12 20:55:30.752083 kubelet[3233]: E1112 20:55:30.752032 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-km9bd" Nov 12 20:55:30.752184 kubelet[3233]: E1112 20:55:30.752094 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"coredns-7db6d8ff4d-km9bd_kube-system(9b2c03e2-3981-44eb-8347-9662269e0c09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-km9bd_kube-system(9b2c03e2-3981-44eb-8347-9662269e0c09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km9bd" podUID="9b2c03e2-3981-44eb-8347-9662269e0c09" Nov 12 20:55:30.769293 systemd[1]: Created slice kubepods-besteffort-pod71853af8_7114_4e11_9b62_8d92def4793d.slice - libcontainer container kubepods-besteffort-pod71853af8_7114_4e11_9b62_8d92def4793d.slice. Nov 12 20:55:30.771855 containerd[1692]: time="2024-11-12T20:55:30.771804864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jt7lt,Uid:71853af8-7114-4e11-9b62-8d92def4793d,Namespace:calico-system,Attempt:0,}" Nov 12 20:55:30.863606 containerd[1692]: time="2024-11-12T20:55:30.863543662Z" level=error msg="Failed to destroy network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.863925 containerd[1692]: time="2024-11-12T20:55:30.863890066Z" level=error msg="encountered an error cleaning up failed sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.864039 containerd[1692]: 
time="2024-11-12T20:55:30.863985667Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jt7lt,Uid:71853af8-7114-4e11-9b62-8d92def4793d,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.864319 kubelet[3233]: E1112 20:55:30.864268 3233 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:30.864421 kubelet[3233]: E1112 20:55:30.864336 3233 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:30.864421 kubelet[3233]: E1112 20:55:30.864363 3233 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jt7lt" Nov 12 20:55:30.864568 kubelet[3233]: E1112 20:55:30.864433 
3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jt7lt_calico-system(71853af8-7114-4e11-9b62-8d92def4793d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jt7lt_calico-system(71853af8-7114-4e11-9b62-8d92def4793d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:30.911834 kubelet[3233]: I1112 20:55:30.911700 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:30.914204 containerd[1692]: time="2024-11-12T20:55:30.914100612Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.0\"" Nov 12 20:55:30.915592 containerd[1692]: time="2024-11-12T20:55:30.914786420Z" level=info msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" Nov 12 20:55:30.915592 containerd[1692]: time="2024-11-12T20:55:30.915009122Z" level=info msg="Ensure that sandbox 1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d in task-service has been cleanup successfully" Nov 12 20:55:30.917717 kubelet[3233]: I1112 20:55:30.916161 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:30.920396 containerd[1692]: time="2024-11-12T20:55:30.920105078Z" level=info msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\"" Nov 12 20:55:30.922225 containerd[1692]: time="2024-11-12T20:55:30.922197500Z" level=info 
msg="Ensure that sandbox 4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb in task-service has been cleanup successfully" Nov 12 20:55:30.926154 kubelet[3233]: I1112 20:55:30.924860 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:30.926253 containerd[1692]: time="2024-11-12T20:55:30.925708739Z" level=info msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\"" Nov 12 20:55:30.926253 containerd[1692]: time="2024-11-12T20:55:30.925878140Z" level=info msg="Ensure that sandbox d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd in task-service has been cleanup successfully" Nov 12 20:55:30.933461 kubelet[3233]: I1112 20:55:30.933303 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:30.942144 containerd[1692]: time="2024-11-12T20:55:30.942108717Z" level=info msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" Nov 12 20:55:30.942336 containerd[1692]: time="2024-11-12T20:55:30.942305619Z" level=info msg="Ensure that sandbox 3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8 in task-service has been cleanup successfully" Nov 12 20:55:30.949200 kubelet[3233]: I1112 20:55:30.949175 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:30.959350 containerd[1692]: time="2024-11-12T20:55:30.958440295Z" level=info msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" Nov 12 20:55:30.959350 containerd[1692]: time="2024-11-12T20:55:30.958670597Z" level=info msg="Ensure that sandbox 6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c in task-service has been 
cleanup successfully" Nov 12 20:55:30.965176 kubelet[3233]: I1112 20:55:30.965148 3233 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:30.969561 containerd[1692]: time="2024-11-12T20:55:30.969338013Z" level=info msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\"" Nov 12 20:55:30.970354 containerd[1692]: time="2024-11-12T20:55:30.970319224Z" level=info msg="Ensure that sandbox d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb in task-service has been cleanup successfully" Nov 12 20:55:31.040418 containerd[1692]: time="2024-11-12T20:55:31.040355586Z" level=error msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" failed" error="failed to destroy network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.041226 kubelet[3233]: E1112 20:55:31.041000 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:31.041226 kubelet[3233]: E1112 20:55:31.041092 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d"} Nov 12 20:55:31.041226 kubelet[3233]: E1112 20:55:31.041192 3233 kuberuntime_manager.go:1075] 
"killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cc7e1b18-e518-4260-a198-f799965c328d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.043985 kubelet[3233]: E1112 20:55:31.041646 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cc7e1b18-e518-4260-a198-f799965c328d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-9hddw" podUID="cc7e1b18-e518-4260-a198-f799965c328d" Nov 12 20:55:31.044136 containerd[1692]: time="2024-11-12T20:55:31.043884725Z" level=error msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" failed" error="failed to destroy network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.044605 kubelet[3233]: E1112 20:55:31.044571 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:31.044773 kubelet[3233]: E1112 20:55:31.044750 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"} Nov 12 20:55:31.044904 kubelet[3233]: E1112 20:55:31.044886 3233 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"af49ceee-a159-42af-9ac7-30f86ab527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.045167 kubelet[3233]: E1112 20:55:31.045130 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"af49ceee-a159-42af-9ac7-30f86ab527d2\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" podUID="af49ceee-a159-42af-9ac7-30f86ab527d2" Nov 12 20:55:31.056527 containerd[1692]: time="2024-11-12T20:55:31.056457661Z" level=error msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" failed" error="failed to destroy network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.057529 kubelet[3233]: E1112 20:55:31.057488 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:31.058841 kubelet[3233]: E1112 20:55:31.058811 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"} Nov 12 20:55:31.059033 kubelet[3233]: E1112 20:55:31.059004 3233 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"71853af8-7114-4e11-9b62-8d92def4793d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.059213 kubelet[3233]: E1112 20:55:31.059184 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71853af8-7114-4e11-9b62-8d92def4793d\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/csi-node-driver-jt7lt" podUID="71853af8-7114-4e11-9b62-8d92def4793d" Nov 12 20:55:31.067255 containerd[1692]: time="2024-11-12T20:55:31.067186378Z" level=error msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" failed" error="failed to destroy network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.067700 kubelet[3233]: E1112 20:55:31.067653 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:31.067922 kubelet[3233]: E1112 20:55:31.067900 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c"} Nov 12 20:55:31.068076 kubelet[3233]: E1112 20:55:31.068056 3233 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9b2c03e2-3981-44eb-8347-9662269e0c09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.068262 kubelet[3233]: E1112 20:55:31.068237 3233 pod_workers.go:1298] "Error 
syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9b2c03e2-3981-44eb-8347-9662269e0c09\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-km9bd" podUID="9b2c03e2-3981-44eb-8347-9662269e0c09" Nov 12 20:55:31.070640 containerd[1692]: time="2024-11-12T20:55:31.070565515Z" level=error msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" failed" error="failed to destroy network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.070873 kubelet[3233]: E1112 20:55:31.070841 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:31.070997 kubelet[3233]: E1112 20:55:31.070887 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8"} Nov 12 20:55:31.070997 kubelet[3233]: E1112 20:55:31.070927 3233 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for 
\"71fd5972-91e2-40bf-bb76-afa53f7f5d20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.070997 kubelet[3233]: E1112 20:55:31.070968 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"71fd5972-91e2-40bf-bb76-afa53f7f5d20\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" podUID="71fd5972-91e2-40bf-bb76-afa53f7f5d20" Nov 12 20:55:31.073346 containerd[1692]: time="2024-11-12T20:55:31.073311345Z" level=error msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" failed" error="failed to destroy network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Nov 12 20:55:31.073498 kubelet[3233]: E1112 20:55:31.073475 3233 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" podSandboxID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:31.073567 kubelet[3233]: E1112 20:55:31.073510 3233 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"} Nov 12 20:55:31.073567 kubelet[3233]: E1112 20:55:31.073545 3233 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"be3507c8-8c76-43ce-abb3-9bd37cc91cb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Nov 12 20:55:31.073669 kubelet[3233]: E1112 20:55:31.073573 3233 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"be3507c8-8c76-43ce-abb3-9bd37cc91cb6\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" podUID="be3507c8-8c76-43ce-abb3-9bd37cc91cb6" Nov 12 20:55:31.501669 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb-shm.mount: Deactivated successfully. Nov 12 20:55:31.501779 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8-shm.mount: Deactivated successfully. 
Nov 12 20:55:31.501855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb-shm.mount: Deactivated successfully. Nov 12 20:55:31.501981 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d-shm.mount: Deactivated successfully. Nov 12 20:55:38.719600 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3232484082.mount: Deactivated successfully. Nov 12 20:55:38.766095 containerd[1692]: time="2024-11-12T20:55:38.766041011Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.775500 containerd[1692]: time="2024-11-12T20:55:38.775436629Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.0: active requests=0, bytes read=140580710" Nov 12 20:55:38.779852 containerd[1692]: time="2024-11-12T20:55:38.779791883Z" level=info msg="ImageCreate event name:\"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.787390 containerd[1692]: time="2024-11-12T20:55:38.787323378Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:38.788102 containerd[1692]: time="2024-11-12T20:55:38.787901785Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.0\" with image id \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:0761a4b4a20aefdf788f2b42a221bfcfe926a474152b74fbe091d847f5d823d7\", size \"140580572\" in 7.873753172s" Nov 12 20:55:38.788102 containerd[1692]: time="2024-11-12T20:55:38.787949586Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node:v3.29.0\" returns image reference \"sha256:df7e265d5ccd035f529156d2ef608d879200d07c1539ca9cac539da91478bc9f\"" Nov 12 20:55:38.804983 containerd[1692]: time="2024-11-12T20:55:38.803117477Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Nov 12 20:55:38.850349 containerd[1692]: time="2024-11-12T20:55:38.850294970Z" level=info msg="CreateContainer within sandbox \"84c6bd2df6e640988c09d2cfa2e4e069c6781a2a80404cdfb96f8d2482f11f1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e3becbc29fa28f19cdbd4ab3f765332d237ac083aae64f19485a0f8e7c4cb2fd\"" Nov 12 20:55:38.852223 containerd[1692]: time="2024-11-12T20:55:38.851072279Z" level=info msg="StartContainer for \"e3becbc29fa28f19cdbd4ab3f765332d237ac083aae64f19485a0f8e7c4cb2fd\"" Nov 12 20:55:38.881152 systemd[1]: Started cri-containerd-e3becbc29fa28f19cdbd4ab3f765332d237ac083aae64f19485a0f8e7c4cb2fd.scope - libcontainer container e3becbc29fa28f19cdbd4ab3f765332d237ac083aae64f19485a0f8e7c4cb2fd. 
Nov 12 20:55:38.910980 containerd[1692]: time="2024-11-12T20:55:38.910664828Z" level=info msg="StartContainer for \"e3becbc29fa28f19cdbd4ab3f765332d237ac083aae64f19485a0f8e7c4cb2fd\" returns successfully" Nov 12 20:55:39.012523 kubelet[3233]: I1112 20:55:39.011779 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fnnmf" podStartSLOduration=1.646313806 podStartE2EDuration="25.011759199s" podCreationTimestamp="2024-11-12 20:55:14 +0000 UTC" firstStartedPulling="2024-11-12 20:55:15.423434905 +0000 UTC m=+22.759056402" lastFinishedPulling="2024-11-12 20:55:38.788880398 +0000 UTC m=+46.124501795" observedRunningTime="2024-11-12 20:55:39.011521996 +0000 UTC m=+46.347143393" watchObservedRunningTime="2024-11-12 20:55:39.011759199 +0000 UTC m=+46.347380596" Nov 12 20:55:39.240705 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Nov 12 20:55:39.241091 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Nov 12 20:55:39.675703 kubelet[3233]: I1112 20:55:39.674692 3233 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:40.842002 kernel: bpftool[4551]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Nov 12 20:55:41.099739 systemd-networkd[1480]: vxlan.calico: Link UP Nov 12 20:55:41.099751 systemd-networkd[1480]: vxlan.calico: Gained carrier Nov 12 20:55:42.752182 systemd-networkd[1480]: vxlan.calico: Gained IPv6LL Nov 12 20:55:42.765623 containerd[1692]: time="2024-11-12T20:55:42.765222771Z" level=info msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\"" Nov 12 20:55:42.767402 containerd[1692]: time="2024-11-12T20:55:42.765712578Z" level=info msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" Nov 12 20:55:42.768361 containerd[1692]: time="2024-11-12T20:55:42.768257510Z" level=info msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" iface="eth0" netns="/var/run/netns/cni-a02a8698-e1f4-9ebb-55eb-f7f3fb9c8f6a" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" iface="eth0" netns="/var/run/netns/cni-a02a8698-e1f4-9ebb-55eb-f7f3fb9c8f6a" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" iface="eth0" netns="/var/run/netns/cni-a02a8698-e1f4-9ebb-55eb-f7f3fb9c8f6a" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.869 [INFO][4669] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.910 [INFO][4684] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.912 [INFO][4684] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.912 [INFO][4684] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.919 [WARNING][4684] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.919 [INFO][4684] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.921 [INFO][4684] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:42.928677 containerd[1692]: 2024-11-12 20:55:42.926 [INFO][4669] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:42.932658 containerd[1692]: time="2024-11-12T20:55:42.930867553Z" level=info msg="TearDown network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" successfully" Nov 12 20:55:42.932658 containerd[1692]: time="2024-11-12T20:55:42.930908354Z" level=info msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" returns successfully" Nov 12 20:55:42.934934 systemd[1]: run-netns-cni\x2da02a8698\x2de1f4\x2d9ebb\x2d55eb\x2df7f3fb9c8f6a.mount: Deactivated successfully. 
Nov 12 20:55:42.938048 containerd[1692]: time="2024-11-12T20:55:42.934930104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-jc8rd,Uid:71fd5972-91e2-40bf-bb76-afa53f7f5d20,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.862 [INFO][4668] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.862 [INFO][4668] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" iface="eth0" netns="/var/run/netns/cni-186704c0-0aeb-76b0-0b35-8ffff333303c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.863 [INFO][4668] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" iface="eth0" netns="/var/run/netns/cni-186704c0-0aeb-76b0-0b35-8ffff333303c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.864 [INFO][4668] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" iface="eth0" netns="/var/run/netns/cni-186704c0-0aeb-76b0-0b35-8ffff333303c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.864 [INFO][4668] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.864 [INFO][4668] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.920 [INFO][4683] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.920 [INFO][4683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.921 [INFO][4683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.929 [WARNING][4683] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.929 [INFO][4683] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.937 [INFO][4683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:42.942869 containerd[1692]: 2024-11-12 20:55:42.940 [INFO][4668] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:42.943834 containerd[1692]: time="2024-11-12T20:55:42.943019706Z" level=info msg="TearDown network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" successfully" Nov 12 20:55:42.943834 containerd[1692]: time="2024-11-12T20:55:42.943049506Z" level=info msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" returns successfully" Nov 12 20:55:42.945981 containerd[1692]: time="2024-11-12T20:55:42.944737527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km9bd,Uid:9b2c03e2-3981-44eb-8347-9662269e0c09,Namespace:kube-system,Attempt:1,}" Nov 12 20:55:42.949137 systemd[1]: run-netns-cni\x2d186704c0\x2d0aeb\x2d76b0\x2d0b35\x2d8ffff333303c.mount: Deactivated successfully. 
Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.887 [INFO][4661] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.891 [INFO][4661] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" iface="eth0" netns="/var/run/netns/cni-f8f5a5c9-7530-ad17-0fe8-7639a4368d8e" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.891 [INFO][4661] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" iface="eth0" netns="/var/run/netns/cni-f8f5a5c9-7530-ad17-0fe8-7639a4368d8e" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.891 [INFO][4661] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" iface="eth0" netns="/var/run/netns/cni-f8f5a5c9-7530-ad17-0fe8-7639a4368d8e" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.891 [INFO][4661] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.891 [INFO][4661] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.952 [INFO][4693] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.952 
[INFO][4693] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.952 [INFO][4693] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.958 [WARNING][4693] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.958 [INFO][4693] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.960 [INFO][4693] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:42.963936 containerd[1692]: 2024-11-12 20:55:42.962 [INFO][4661] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Nov 12 20:55:42.969110 containerd[1692]: time="2024-11-12T20:55:42.966505601Z" level=info msg="TearDown network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" successfully" Nov 12 20:55:42.969110 containerd[1692]: time="2024-11-12T20:55:42.966556402Z" level=info msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" returns successfully" Nov 12 20:55:42.969110 containerd[1692]: time="2024-11-12T20:55:42.968723829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-8d9fq,Uid:be3507c8-8c76-43ce-abb3-9bd37cc91cb6,Namespace:calico-apiserver,Attempt:1,}" Nov 12 20:55:42.967682 systemd[1]: run-netns-cni\x2df8f5a5c9\x2d7530\x2dad17\x2d0fe8\x2d7639a4368d8e.mount: Deactivated successfully. Nov 12 20:55:43.195537 systemd-networkd[1480]: cali7664069681b: Link UP Nov 12 20:55:43.195796 systemd-networkd[1480]: cali7664069681b: Gained carrier Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.048 [INFO][4702] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0 calico-apiserver-777ccbcc85- calico-apiserver 71fd5972-91e2-40bf-bb76-afa53f7f5d20 800 0 2024-11-12 20:55:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777ccbcc85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a calico-apiserver-777ccbcc85-jc8rd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali7664069681b [] []}} ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" 
WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.048 [INFO][4702] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.127 [INFO][4732] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" HandleID="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.140 [INFO][4732] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" HandleID="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051f30), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"calico-apiserver-777ccbcc85-jc8rd", "timestamp":"2024-11-12 20:55:43.127525993 +0000 UTC"}, Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.140 [INFO][4732] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.140 [INFO][4732] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.140 [INFO][4732] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.142 [INFO][4732] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.149 [INFO][4732] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.156 [INFO][4732] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.158 [INFO][4732] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.161 [INFO][4732] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.161 [INFO][4732] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.163 [INFO][4732] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0 Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.173 [INFO][4732] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" 
host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.183 [INFO][4732] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.1/26] block=192.168.110.0/26 handle="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.183 [INFO][4732] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.1/26] handle="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.184 [INFO][4732] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:43.222075 containerd[1692]: 2024-11-12 20:55:43.184 [INFO][4732] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.1/26] IPv6=[] ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" HandleID="k8s-pod-network.3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.188 [INFO][4702] cni-plugin/k8s.go 386: Populated endpoint ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fd5972-91e2-40bf-bb76-afa53f7f5d20", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"calico-apiserver-777ccbcc85-jc8rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7664069681b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.188 [INFO][4702] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.1/32] ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.188 [INFO][4702] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7664069681b ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.193 [INFO][4702] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.193 [INFO][4702] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fd5972-91e2-40bf-bb76-afa53f7f5d20", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0", Pod:"calico-apiserver-777ccbcc85-jc8rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7664069681b", MAC:"b2:cc:2f:34:74:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.223640 containerd[1692]: 2024-11-12 20:55:43.210 [INFO][4702] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-jc8rd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:43.260271 systemd-networkd[1480]: cali96ebb4ddc94: Link UP Nov 12 20:55:43.263888 systemd-networkd[1480]: cali96ebb4ddc94: Gained carrier Nov 12 20:55:43.297596 containerd[1692]: time="2024-11-12T20:55:43.297057345Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:43.297596 containerd[1692]: time="2024-11-12T20:55:43.297130146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:43.297596 containerd[1692]: time="2024-11-12T20:55:43.297175847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.297596 containerd[1692]: time="2024-11-12T20:55:43.297345349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.081 [INFO][4711] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0 coredns-7db6d8ff4d- kube-system 9b2c03e2-3981-44eb-8347-9662269e0c09 799 0 2024-11-12 20:55:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a coredns-7db6d8ff4d-km9bd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali96ebb4ddc94 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.081 [INFO][4711] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.161 [INFO][4739] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" HandleID="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.176 [INFO][4739] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" 
HandleID="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ed1f0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"coredns-7db6d8ff4d-km9bd", "timestamp":"2024-11-12 20:55:43.161037799 +0000 UTC"}, Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.176 [INFO][4739] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.184 [INFO][4739] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.184 [INFO][4739] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.187 [INFO][4739] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.197 [INFO][4739] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.208 [INFO][4739] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.213 [INFO][4739] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.217 [INFO][4739] ipam/ipam.go 232: Affinity is confirmed and block has 
been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.217 [INFO][4739] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.218 [INFO][4739] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7 Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.226 [INFO][4739] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.242 [INFO][4739] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.2/26] block=192.168.110.0/26 handle="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.242 [INFO][4739] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.2/26] handle="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.242 [INFO][4739] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:55:43.305895 containerd[1692]: 2024-11-12 20:55:43.242 [INFO][4739] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.2/26] IPv6=[] ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" HandleID="k8s-pod-network.4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.247 [INFO][4711] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b2c03e2-3981-44eb-8347-9662269e0c09", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"coredns-7db6d8ff4d-km9bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"cali96ebb4ddc94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.247 [INFO][4711] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.2/32] ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.248 [INFO][4711] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali96ebb4ddc94 ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.267 [INFO][4711] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.268 [INFO][4711] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" 
WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b2c03e2-3981-44eb-8347-9662269e0c09", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7", Pod:"coredns-7db6d8ff4d-km9bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96ebb4ddc94", MAC:"56:f9:02:07:d0:0d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.306813 containerd[1692]: 2024-11-12 20:55:43.300 
[INFO][4711] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7" Namespace="kube-system" Pod="coredns-7db6d8ff4d-km9bd" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:43.346487 systemd[1]: Started cri-containerd-3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0.scope - libcontainer container 3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0. Nov 12 20:55:43.352106 systemd-networkd[1480]: cali0f53dc0ce1b: Link UP Nov 12 20:55:43.352359 systemd-networkd[1480]: cali0f53dc0ce1b: Gained carrier Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.106 [INFO][4722] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0 calico-apiserver-777ccbcc85- calico-apiserver be3507c8-8c76-43ce-abb3-9bd37cc91cb6 801 0 2024-11-12 20:55:14 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:777ccbcc85 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a calico-apiserver-777ccbcc85-8d9fq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0f53dc0ce1b [] []}} ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.107 [INFO][4722] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" 
WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.179 [INFO][4747] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" HandleID="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.193 [INFO][4747] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" HandleID="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000e2bf0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"calico-apiserver-777ccbcc85-8d9fq", "timestamp":"2024-11-12 20:55:43.179100417 +0000 UTC"}, Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.195 [INFO][4747] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.244 [INFO][4747] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.244 [INFO][4747] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.249 [INFO][4747] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.265 [INFO][4747] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.282 [INFO][4747] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.287 [INFO][4747] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.303 [INFO][4747] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.303 [INFO][4747] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.309 [INFO][4747] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74 Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.322 [INFO][4747] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.341 [INFO][4747] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.110.3/26] block=192.168.110.0/26 handle="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.342 [INFO][4747] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.3/26] handle="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.342 [INFO][4747] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:43.378340 containerd[1692]: 2024-11-12 20:55:43.342 [INFO][4747] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.3/26] IPv6=[] ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" HandleID="k8s-pod-network.f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.345 [INFO][4722] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3507c8-8c76-43ce-abb3-9bd37cc91cb6", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"calico-apiserver-777ccbcc85-8d9fq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f53dc0ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.345 [INFO][4722] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.3/32] ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.345 [INFO][4722] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0f53dc0ce1b ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.349 [INFO][4722] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" 
WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.351 [INFO][4722] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3507c8-8c76-43ce-abb3-9bd37cc91cb6", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74", Pod:"calico-apiserver-777ccbcc85-8d9fq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f53dc0ce1b", MAC:"0a:62:fb:7f:3f:75", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:43.379404 containerd[1692]: 2024-11-12 20:55:43.376 [INFO][4722] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74" Namespace="calico-apiserver" Pod="calico-apiserver-777ccbcc85-8d9fq" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0" Nov 12 20:55:43.398131 containerd[1692]: time="2024-11-12T20:55:43.398034768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:43.398131 containerd[1692]: time="2024-11-12T20:55:43.398117369Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:43.398131 containerd[1692]: time="2024-11-12T20:55:43.398144169Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.398954 containerd[1692]: time="2024-11-12T20:55:43.398797377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.436988 containerd[1692]: time="2024-11-12T20:55:43.435613423Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:43.436988 containerd[1692]: time="2024-11-12T20:55:43.435675223Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:43.436988 containerd[1692]: time="2024-11-12T20:55:43.435706924Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.436988 containerd[1692]: time="2024-11-12T20:55:43.435836925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:43.437500 systemd[1]: Started cri-containerd-4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7.scope - libcontainer container 4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7. Nov 12 20:55:43.472090 systemd[1]: Started cri-containerd-f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74.scope - libcontainer container f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74. Nov 12 20:55:43.503803 containerd[1692]: time="2024-11-12T20:55:43.503715147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-jc8rd,Uid:71fd5972-91e2-40bf-bb76-afa53f7f5d20,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0\"" Nov 12 20:55:43.510985 containerd[1692]: time="2024-11-12T20:55:43.510408028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:55:43.535727 containerd[1692]: time="2024-11-12T20:55:43.535683434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-km9bd,Uid:9b2c03e2-3981-44eb-8347-9662269e0c09,Namespace:kube-system,Attempt:1,} returns sandbox id \"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7\"" Nov 12 20:55:43.541223 containerd[1692]: time="2024-11-12T20:55:43.541054899Z" level=info msg="CreateContainer within sandbox \"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:55:43.558729 containerd[1692]: time="2024-11-12T20:55:43.558661412Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-777ccbcc85-8d9fq,Uid:be3507c8-8c76-43ce-abb3-9bd37cc91cb6,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74\"" Nov 12 20:55:43.576744 containerd[1692]: time="2024-11-12T20:55:43.576693631Z" level=info msg="CreateContainer within sandbox \"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7eb2b5d7e01ba242dea1308971fe98020eafb84242c50a06d164591f97907df8\"" Nov 12 20:55:43.577477 containerd[1692]: time="2024-11-12T20:55:43.577449840Z" level=info msg="StartContainer for \"7eb2b5d7e01ba242dea1308971fe98020eafb84242c50a06d164591f97907df8\"" Nov 12 20:55:43.608165 systemd[1]: Started cri-containerd-7eb2b5d7e01ba242dea1308971fe98020eafb84242c50a06d164591f97907df8.scope - libcontainer container 7eb2b5d7e01ba242dea1308971fe98020eafb84242c50a06d164591f97907df8. Nov 12 20:55:43.640306 containerd[1692]: time="2024-11-12T20:55:43.640125398Z" level=info msg="StartContainer for \"7eb2b5d7e01ba242dea1308971fe98020eafb84242c50a06d164591f97907df8\" returns successfully" Nov 12 20:55:43.763896 containerd[1692]: time="2024-11-12T20:55:43.763746895Z" level=info msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\"" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.829 [INFO][4968] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.829 [INFO][4968] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. 
ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" iface="eth0" netns="/var/run/netns/cni-5f62f120-d318-79b9-d612-e66b63e34e4d" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.836 [INFO][4968] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" iface="eth0" netns="/var/run/netns/cni-5f62f120-d318-79b9-d612-e66b63e34e4d" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.836 [INFO][4968] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" iface="eth0" netns="/var/run/netns/cni-5f62f120-d318-79b9-d612-e66b63e34e4d" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.836 [INFO][4968] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.837 [INFO][4968] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.879 [INFO][4975] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.880 [INFO][4975] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.880 [INFO][4975] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.905 [WARNING][4975] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.905 [INFO][4975] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.915 [INFO][4975] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:43.919114 containerd[1692]: 2024-11-12 20:55:43.917 [INFO][4968] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Nov 12 20:55:43.921652 containerd[1692]: time="2024-11-12T20:55:43.919250177Z" level=info msg="TearDown network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" successfully" Nov 12 20:55:43.921652 containerd[1692]: time="2024-11-12T20:55:43.919283678Z" level=info msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" returns successfully" Nov 12 20:55:43.923266 containerd[1692]: time="2024-11-12T20:55:43.923200725Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf77db8d-lgc2s,Uid:af49ceee-a159-42af-9ac7-30f86ab527d2,Namespace:calico-system,Attempt:1,}" Nov 12 20:55:43.946201 systemd[1]: run-netns-cni\x2d5f62f120\x2dd318\x2d79b9\x2dd612\x2de66b63e34e4d.mount: Deactivated successfully. 
Nov 12 20:55:44.022668 kubelet[3233]: I1112 20:55:44.022502 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-km9bd" podStartSLOduration=38.022480327 podStartE2EDuration="38.022480327s" podCreationTimestamp="2024-11-12 20:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:44.019952296 +0000 UTC m=+51.355573693" watchObservedRunningTime="2024-11-12 20:55:44.022480327 +0000 UTC m=+51.358101724" Nov 12 20:55:44.117116 systemd-networkd[1480]: calif9432c011cb: Link UP Nov 12 20:55:44.118146 systemd-networkd[1480]: calif9432c011cb: Gained carrier Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:43.997 [INFO][4982] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0 calico-kube-controllers-5bf77db8d- calico-system af49ceee-a159-42af-9ac7-30f86ab527d2 817 0 2024-11-12 20:55:15 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5bf77db8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a calico-kube-controllers-5bf77db8d-lgc2s eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calif9432c011cb [] []}} ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:43.997 [INFO][4982] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.057 [INFO][4993] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" HandleID="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.077 [INFO][4993] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" HandleID="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318dc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"calico-kube-controllers-5bf77db8d-lgc2s", "timestamp":"2024-11-12 20:55:44.056886944 +0000 UTC"}, Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.077 [INFO][4993] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.078 [INFO][4993] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.078 [INFO][4993] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.080 [INFO][4993] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.085 [INFO][4993] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.089 [INFO][4993] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.091 [INFO][4993] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.095 [INFO][4993] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.095 [INFO][4993] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.097 [INFO][4993] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878 Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.102 [INFO][4993] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.109 [INFO][4993] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.110.4/26] block=192.168.110.0/26 handle="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.109 [INFO][4993] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.4/26] handle="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.109 [INFO][4993] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:44.139280 containerd[1692]: 2024-11-12 20:55:44.109 [INFO][4993] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.4/26] IPv6=[] ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" HandleID="k8s-pod-network.74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.112 [INFO][4982] cni-plugin/k8s.go 386: Populated endpoint ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0", GenerateName:"calico-kube-controllers-5bf77db8d-", Namespace:"calico-system", SelfLink:"", UID:"af49ceee-a159-42af-9ac7-30f86ab527d2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf77db8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"calico-kube-controllers-5bf77db8d-lgc2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9432c011cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.112 [INFO][4982] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.4/32] ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.112 [INFO][4982] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif9432c011cb ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.118 [INFO][4982] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" 
Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.118 [INFO][4982] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0", GenerateName:"calico-kube-controllers-5bf77db8d-", Namespace:"calico-system", SelfLink:"", UID:"af49ceee-a159-42af-9ac7-30f86ab527d2", ResourceVersion:"817", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf77db8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878", Pod:"calico-kube-controllers-5bf77db8d-lgc2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", 
"ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9432c011cb", MAC:"32:aa:ca:91:95:d4", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:44.140297 containerd[1692]: 2024-11-12 20:55:44.135 [INFO][4982] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878" Namespace="calico-system" Pod="calico-kube-controllers-5bf77db8d-lgc2s" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0" Nov 12 20:55:44.172321 containerd[1692]: time="2024-11-12T20:55:44.172005037Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:44.172321 containerd[1692]: time="2024-11-12T20:55:44.172067038Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:44.172321 containerd[1692]: time="2024-11-12T20:55:44.172101038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.172321 containerd[1692]: time="2024-11-12T20:55:44.172205140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:44.196156 systemd[1]: Started cri-containerd-74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878.scope - libcontainer container 74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878. 
Nov 12 20:55:44.248507 containerd[1692]: time="2024-11-12T20:55:44.248435762Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5bf77db8d-lgc2s,Uid:af49ceee-a159-42af-9ac7-30f86ab527d2,Namespace:calico-system,Attempt:1,} returns sandbox id \"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878\"" Nov 12 20:55:44.766153 containerd[1692]: time="2024-11-12T20:55:44.765709324Z" level=info msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" Nov 12 20:55:44.800196 systemd-networkd[1480]: cali96ebb4ddc94: Gained IPv6LL Nov 12 20:55:44.864151 systemd-networkd[1480]: cali7664069681b: Gained IPv6LL Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.829 [INFO][5074] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.830 [INFO][5074] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" iface="eth0" netns="/var/run/netns/cni-4a2a0ed4-a867-e7f9-79a2-28a03a4664e9" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.830 [INFO][5074] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" iface="eth0" netns="/var/run/netns/cni-4a2a0ed4-a867-e7f9-79a2-28a03a4664e9" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.831 [INFO][5074] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" iface="eth0" netns="/var/run/netns/cni-4a2a0ed4-a867-e7f9-79a2-28a03a4664e9" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.831 [INFO][5074] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.831 [INFO][5074] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.861 [INFO][5080] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.861 [INFO][5080] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.861 [INFO][5080] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.867 [WARNING][5080] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.867 [INFO][5080] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.868 [INFO][5080] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:44.870487 containerd[1692]: 2024-11-12 20:55:44.869 [INFO][5074] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:44.871419 containerd[1692]: time="2024-11-12T20:55:44.870725596Z" level=info msg="TearDown network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" successfully" Nov 12 20:55:44.871419 containerd[1692]: time="2024-11-12T20:55:44.870766196Z" level=info msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" returns successfully" Nov 12 20:55:44.871634 containerd[1692]: time="2024-11-12T20:55:44.871600506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hddw,Uid:cc7e1b18-e518-4260-a198-f799965c328d,Namespace:kube-system,Attempt:1,}" Nov 12 20:55:44.936339 systemd[1]: run-netns-cni\x2d4a2a0ed4\x2da867\x2de7f9\x2d79a2\x2d28a03a4664e9.mount: Deactivated successfully. 
Nov 12 20:55:45.030019 systemd-networkd[1480]: calif7c39d9d84a: Link UP Nov 12 20:55:45.031088 systemd-networkd[1480]: calif7c39d9d84a: Gained carrier Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.952 [INFO][5087] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0 coredns-7db6d8ff4d- kube-system cc7e1b18-e518-4260-a198-f799965c328d 835 0 2024-11-12 20:55:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a coredns-7db6d8ff4d-9hddw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif7c39d9d84a [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.952 [INFO][5087] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.982 [INFO][5098] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" HandleID="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.995 [INFO][5098] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" HandleID="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319430), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"coredns-7db6d8ff4d-9hddw", "timestamp":"2024-11-12 20:55:44.982740852 +0000 UTC"}, Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.995 [INFO][5098] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.995 [INFO][5098] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.995 [INFO][5098] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:44.997 [INFO][5098] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.001 [INFO][5098] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.005 [INFO][5098] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.006 [INFO][5098] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.008 [INFO][5098] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.008 [INFO][5098] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.009 [INFO][5098] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.015 [INFO][5098] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.025 [INFO][5098] ipam/ipam.go 1216: 
Successfully claimed IPs: [192.168.110.5/26] block=192.168.110.0/26 handle="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.025 [INFO][5098] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.5/26] handle="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.025 [INFO][5098] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:45.050126 containerd[1692]: 2024-11-12 20:55:45.025 [INFO][5098] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.5/26] IPv6=[] ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" HandleID="k8s-pod-network.db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.027 [INFO][5087] cni-plugin/k8s.go 386: Populated endpoint ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cc7e1b18-e518-4260-a198-f799965c328d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"coredns-7db6d8ff4d-9hddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c39d9d84a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.027 [INFO][5087] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.5/32] ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.027 [INFO][5087] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif7c39d9d84a ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.030 [INFO][5087] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.031 [INFO][5087] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cc7e1b18-e518-4260-a198-f799965c328d", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f", Pod:"coredns-7db6d8ff4d-9hddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c39d9d84a", MAC:"5e:5e:f6:de:c1:8a", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:45.055472 containerd[1692]: 2024-11-12 20:55:45.045 [INFO][5087] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f" Namespace="kube-system" Pod="coredns-7db6d8ff4d-9hddw" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:45.099377 containerd[1692]: time="2024-11-12T20:55:45.098857157Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:45.099377 containerd[1692]: time="2024-11-12T20:55:45.098954458Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:45.099377 containerd[1692]: time="2024-11-12T20:55:45.099000959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:45.099377 containerd[1692]: time="2024-11-12T20:55:45.099174761Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:45.144150 systemd[1]: Started cri-containerd-db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f.scope - libcontainer container db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f. 
Nov 12 20:55:45.184192 systemd-networkd[1480]: cali0f53dc0ce1b: Gained IPv6LL Nov 12 20:55:45.243988 containerd[1692]: time="2024-11-12T20:55:45.242277493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-9hddw,Uid:cc7e1b18-e518-4260-a198-f799965c328d,Namespace:kube-system,Attempt:1,} returns sandbox id \"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f\"" Nov 12 20:55:45.247298 containerd[1692]: time="2024-11-12T20:55:45.246795048Z" level=info msg="CreateContainer within sandbox \"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Nov 12 20:55:45.293642 containerd[1692]: time="2024-11-12T20:55:45.293437413Z" level=info msg="CreateContainer within sandbox \"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"56fd7e8c8c8a57dc4165deb4b9065364036505de420b62d3156707941673a2dd\"" Nov 12 20:55:45.296006 containerd[1692]: time="2024-11-12T20:55:45.294238922Z" level=info msg="StartContainer for \"56fd7e8c8c8a57dc4165deb4b9065364036505de420b62d3156707941673a2dd\"" Nov 12 20:55:45.322129 systemd[1]: Started cri-containerd-56fd7e8c8c8a57dc4165deb4b9065364036505de420b62d3156707941673a2dd.scope - libcontainer container 56fd7e8c8c8a57dc4165deb4b9065364036505de420b62d3156707941673a2dd. Nov 12 20:55:45.357984 containerd[1692]: time="2024-11-12T20:55:45.356159272Z" level=info msg="StartContainer for \"56fd7e8c8c8a57dc4165deb4b9065364036505de420b62d3156707941673a2dd\" returns successfully" Nov 12 20:55:45.764458 containerd[1692]: time="2024-11-12T20:55:45.764406614Z" level=info msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\"" Nov 12 20:55:45.825172 systemd-networkd[1480]: calif9432c011cb: Gained IPv6LL Nov 12 20:55:45.938340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount728551106.mount: Deactivated successfully. 
Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.876 [INFO][5215] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.877 [INFO][5215] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" iface="eth0" netns="/var/run/netns/cni-273bcd39-2f1e-53c7-30f5-f4965b6044c6" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.878 [INFO][5215] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" iface="eth0" netns="/var/run/netns/cni-273bcd39-2f1e-53c7-30f5-f4965b6044c6" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.878 [INFO][5215] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" iface="eth0" netns="/var/run/netns/cni-273bcd39-2f1e-53c7-30f5-f4965b6044c6" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.878 [INFO][5215] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.878 [INFO][5215] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.956 [INFO][5221] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.957 [INFO][5221] 
ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.957 [INFO][5221] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.967 [WARNING][5221] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.968 [INFO][5221] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.970 [INFO][5221] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:45.972817 containerd[1692]: 2024-11-12 20:55:45.971 [INFO][5215] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Nov 12 20:55:45.977195 containerd[1692]: time="2024-11-12T20:55:45.976139177Z" level=info msg="TearDown network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" successfully" Nov 12 20:55:45.977195 containerd[1692]: time="2024-11-12T20:55:45.976178378Z" level=info msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" returns successfully" Nov 12 20:55:45.977334 containerd[1692]: time="2024-11-12T20:55:45.977232690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jt7lt,Uid:71853af8-7114-4e11-9b62-8d92def4793d,Namespace:calico-system,Attempt:1,}" Nov 12 20:55:45.978409 systemd[1]: run-netns-cni\x2d273bcd39\x2d2f1e\x2d53c7\x2d30f5\x2df4965b6044c6.mount: Deactivated successfully. Nov 12 20:55:46.055802 kubelet[3233]: I1112 20:55:46.055244 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-9hddw" podStartSLOduration=40.055220735 podStartE2EDuration="40.055220735s" podCreationTimestamp="2024-11-12 20:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-11-12 20:55:46.053440913 +0000 UTC m=+53.389062310" watchObservedRunningTime="2024-11-12 20:55:46.055220735 +0000 UTC m=+53.390842232" Nov 12 20:55:46.264844 systemd-networkd[1480]: cali5cd95e7c199: Link UP Nov 12 20:55:46.265604 systemd-networkd[1480]: cali5cd95e7c199: Gained carrier Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.129 [INFO][5228] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0 csi-node-driver- calico-system 71853af8-7114-4e11-9b62-8d92def4793d 843 0 2024-11-12 20:55:15 +0000 UTC map[app.kubernetes.io/name:csi-node-driver 
controller-revision-hash:85bdc57578 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4081.2.0-a-c73ec1ae7a csi-node-driver-jt7lt eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali5cd95e7c199 [] []}} ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.130 [INFO][5228] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.198 [INFO][5245] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" HandleID="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.210 [INFO][5245] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" HandleID="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000287a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4081.2.0-a-c73ec1ae7a", "pod":"csi-node-driver-jt7lt", "timestamp":"2024-11-12 20:55:46.198350167 +0000 UTC"}, 
Hostname:"ci-4081.2.0-a-c73ec1ae7a", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.210 [INFO][5245] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.210 [INFO][5245] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.210 [INFO][5245] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4081.2.0-a-c73ec1ae7a' Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.212 [INFO][5245] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.217 [INFO][5245] ipam/ipam.go 372: Looking up existing affinities for host host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.225 [INFO][5245] ipam/ipam.go 489: Trying affinity for 192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.227 [INFO][5245] ipam/ipam.go 155: Attempting to load block cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.230 [INFO][5245] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.110.0/26 host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.230 [INFO][5245] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.110.0/26 handle="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.232 
[INFO][5245] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.239 [INFO][5245] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.110.0/26 handle="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.252 [INFO][5245] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.110.6/26] block=192.168.110.0/26 handle="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.252 [INFO][5245] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.110.6/26] handle="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" host="ci-4081.2.0-a-c73ec1ae7a" Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.252 [INFO][5245] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Nov 12 20:55:46.292136 containerd[1692]: 2024-11-12 20:55:46.252 [INFO][5245] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.110.6/26] IPv6=[] ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" HandleID="k8s-pod-network.20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.256 [INFO][5228] cni-plugin/k8s.go 386: Populated endpoint ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71853af8-7114-4e11-9b62-8d92def4793d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"", Pod:"csi-node-driver-jt7lt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), 
IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5cd95e7c199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.256 [INFO][5228] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.110.6/32] ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.256 [INFO][5228] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cd95e7c199 ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.261 [INFO][5228] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.261 [INFO][5228] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", 
UID:"71853af8-7114-4e11-9b62-8d92def4793d", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb", Pod:"csi-node-driver-jt7lt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5cd95e7c199", MAC:"1a:74:b5:47:51:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:46.294239 containerd[1692]: 2024-11-12 20:55:46.289 [INFO][5228] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb" Namespace="calico-system" Pod="csi-node-driver-jt7lt" WorkloadEndpoint="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0" Nov 12 20:55:46.341816 containerd[1692]: time="2024-11-12T20:55:46.341425299Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Nov 12 20:55:46.341816 containerd[1692]: time="2024-11-12T20:55:46.341531100Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Nov 12 20:55:46.341816 containerd[1692]: time="2024-11-12T20:55:46.341557701Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:46.341816 containerd[1692]: time="2024-11-12T20:55:46.341683802Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Nov 12 20:55:46.376251 systemd[1]: Started cri-containerd-20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb.scope - libcontainer container 20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb. Nov 12 20:55:46.413679 containerd[1692]: time="2024-11-12T20:55:46.413623473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jt7lt,Uid:71853af8-7114-4e11-9b62-8d92def4793d,Namespace:calico-system,Attempt:1,} returns sandbox id \"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb\"" Nov 12 20:55:46.925801 containerd[1692]: time="2024-11-12T20:55:46.925742273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:46.927665 containerd[1692]: time="2024-11-12T20:55:46.927594695Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=41963930" Nov 12 20:55:46.935719 containerd[1692]: time="2024-11-12T20:55:46.932177051Z" level=info msg="ImageCreate event name:\"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:46.937544 containerd[1692]: time="2024-11-12T20:55:46.937505815Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:46.938413 
containerd[1692]: time="2024-11-12T20:55:46.938247724Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 3.427792295s" Nov 12 20:55:46.938413 containerd[1692]: time="2024-11-12T20:55:46.938288325Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:55:46.939946 containerd[1692]: time="2024-11-12T20:55:46.939500139Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\"" Nov 12 20:55:46.941929 containerd[1692]: time="2024-11-12T20:55:46.941900968Z" level=info msg="CreateContainer within sandbox \"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:55:46.976121 systemd-networkd[1480]: calif7c39d9d84a: Gained IPv6LL Nov 12 20:55:46.983284 containerd[1692]: time="2024-11-12T20:55:46.983240369Z" level=info msg="CreateContainer within sandbox \"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ff1e715e25f370106e032b5f1c1ac4ec8423cd3737fbc52ecf2bf4b6fb9e3892\"" Nov 12 20:55:46.984033 containerd[1692]: time="2024-11-12T20:55:46.983823676Z" level=info msg="StartContainer for \"ff1e715e25f370106e032b5f1c1ac4ec8423cd3737fbc52ecf2bf4b6fb9e3892\"" Nov 12 20:55:47.019131 systemd[1]: Started cri-containerd-ff1e715e25f370106e032b5f1c1ac4ec8423cd3737fbc52ecf2bf4b6fb9e3892.scope - libcontainer container ff1e715e25f370106e032b5f1c1ac4ec8423cd3737fbc52ecf2bf4b6fb9e3892. 
Nov 12 20:55:47.068946 containerd[1692]: time="2024-11-12T20:55:47.068886906Z" level=info msg="StartContainer for \"ff1e715e25f370106e032b5f1c1ac4ec8423cd3737fbc52ecf2bf4b6fb9e3892\" returns successfully" Nov 12 20:55:47.252350 containerd[1692]: time="2024-11-12T20:55:47.251349814Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:47.253808 containerd[1692]: time="2024-11-12T20:55:47.253368439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.0: active requests=0, bytes read=77" Nov 12 20:55:47.256365 containerd[1692]: time="2024-11-12T20:55:47.256325375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" with image id \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:548806adadee2058a3e93296913d1d47f490e9c8115d36abeb074a3f6576ad39\", size \"43457038\" in 316.792435ms" Nov 12 20:55:47.256477 containerd[1692]: time="2024-11-12T20:55:47.256461276Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.0\" returns image reference \"sha256:1beae95165532475bbbf9b20f89a88797a505fab874cc7146715dfbdbed0488a\"" Nov 12 20:55:47.258150 containerd[1692]: time="2024-11-12T20:55:47.258129996Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\"" Nov 12 20:55:47.261852 containerd[1692]: time="2024-11-12T20:55:47.261632739Z" level=info msg="CreateContainer within sandbox \"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Nov 12 20:55:47.301682 containerd[1692]: time="2024-11-12T20:55:47.301633023Z" level=info msg="CreateContainer within sandbox \"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} 
returns container id \"6224e9e4073ee49117bf688183a45874efd96d4c65efffe94e7c66dfc78ed707\"" Nov 12 20:55:47.304889 containerd[1692]: time="2024-11-12T20:55:47.304634559Z" level=info msg="StartContainer for \"6224e9e4073ee49117bf688183a45874efd96d4c65efffe94e7c66dfc78ed707\"" Nov 12 20:55:47.347314 systemd[1]: Started cri-containerd-6224e9e4073ee49117bf688183a45874efd96d4c65efffe94e7c66dfc78ed707.scope - libcontainer container 6224e9e4073ee49117bf688183a45874efd96d4c65efffe94e7c66dfc78ed707. Nov 12 20:55:47.428687 containerd[1692]: time="2024-11-12T20:55:47.428613760Z" level=info msg="StartContainer for \"6224e9e4073ee49117bf688183a45874efd96d4c65efffe94e7c66dfc78ed707\" returns successfully" Nov 12 20:55:47.937292 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1737606122.mount: Deactivated successfully. Nov 12 20:55:48.063419 kubelet[3233]: I1112 20:55:48.063327 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-777ccbcc85-8d9fq" podStartSLOduration=30.366238789 podStartE2EDuration="34.063304044s" podCreationTimestamp="2024-11-12 20:55:14 +0000 UTC" firstStartedPulling="2024-11-12 20:55:43.560402933 +0000 UTC m=+50.896024330" lastFinishedPulling="2024-11-12 20:55:47.257468088 +0000 UTC m=+54.593089585" observedRunningTime="2024-11-12 20:55:48.062413533 +0000 UTC m=+55.398034930" watchObservedRunningTime="2024-11-12 20:55:48.063304044 +0000 UTC m=+55.398925441" Nov 12 20:55:48.256165 systemd-networkd[1480]: cali5cd95e7c199: Gained IPv6LL Nov 12 20:55:48.634373 kubelet[3233]: I1112 20:55:48.634291 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-777ccbcc85-jc8rd" podStartSLOduration=31.202976218 podStartE2EDuration="34.634265455s" podCreationTimestamp="2024-11-12 20:55:14 +0000 UTC" firstStartedPulling="2024-11-12 20:55:43.5080628 +0000 UTC m=+50.843684297" lastFinishedPulling="2024-11-12 20:55:46.939352037 +0000 UTC m=+54.274973534" 
observedRunningTime="2024-11-12 20:55:48.083837592 +0000 UTC m=+55.419458989" watchObservedRunningTime="2024-11-12 20:55:48.634265455 +0000 UTC m=+55.969886852" Nov 12 20:55:49.055644 kubelet[3233]: I1112 20:55:49.055295 3233 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Nov 12 20:55:50.226540 containerd[1692]: time="2024-11-12T20:55:50.226488030Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.229566 containerd[1692]: time="2024-11-12T20:55:50.229510367Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.0: active requests=0, bytes read=34152461" Nov 12 20:55:50.234920 containerd[1692]: time="2024-11-12T20:55:50.234873632Z" level=info msg="ImageCreate event name:\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.241445 containerd[1692]: time="2024-11-12T20:55:50.241361710Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:50.242313 containerd[1692]: time="2024-11-12T20:55:50.242099519Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" with image id \"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:8242cd7e9b9b505c73292dd812ce1669bca95cacc56d30687f49e6e0b95c5535\", size \"35645521\" in 2.98373852s" Nov 12 20:55:50.242313 containerd[1692]: time="2024-11-12T20:55:50.242139420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.0\" returns image reference 
\"sha256:48cc7c24253a8037ceea486888a8c75cd74cbf20752c30b86fae718f5a3fc134\"" Nov 12 20:55:50.244804 containerd[1692]: time="2024-11-12T20:55:50.243359434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\"" Nov 12 20:55:50.260070 containerd[1692]: time="2024-11-12T20:55:50.256948699Z" level=info msg="CreateContainer within sandbox \"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Nov 12 20:55:50.300775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3726964400.mount: Deactivated successfully. Nov 12 20:55:50.310249 containerd[1692]: time="2024-11-12T20:55:50.310142143Z" level=info msg="CreateContainer within sandbox \"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f\"" Nov 12 20:55:50.312049 containerd[1692]: time="2024-11-12T20:55:50.310863252Z" level=info msg="StartContainer for \"c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f\"" Nov 12 20:55:50.347145 systemd[1]: Started cri-containerd-c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f.scope - libcontainer container c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f. 
Nov 12 20:55:50.418399 containerd[1692]: time="2024-11-12T20:55:50.418234851Z" level=info msg="StartContainer for \"c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f\" returns successfully" Nov 12 20:55:51.088317 kubelet[3233]: I1112 20:55:51.088248 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5bf77db8d-lgc2s" podStartSLOduration=30.095624913 podStartE2EDuration="36.088226457s" podCreationTimestamp="2024-11-12 20:55:15 +0000 UTC" firstStartedPulling="2024-11-12 20:55:44.250570788 +0000 UTC m=+51.586192285" lastFinishedPulling="2024-11-12 20:55:50.243172432 +0000 UTC m=+57.578793829" observedRunningTime="2024-11-12 20:55:51.085428124 +0000 UTC m=+58.421049521" watchObservedRunningTime="2024-11-12 20:55:51.088226457 +0000 UTC m=+58.423847954" Nov 12 20:55:51.752751 containerd[1692]: time="2024-11-12T20:55:51.751825808Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:51.763663 containerd[1692]: time="2024-11-12T20:55:51.763127942Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.0: active requests=0, bytes read=7902635" Nov 12 20:55:51.770573 containerd[1692]: time="2024-11-12T20:55:51.770532029Z" level=info msg="ImageCreate event name:\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:51.777062 containerd[1692]: time="2024-11-12T20:55:51.777019406Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Nov 12 20:55:51.777844 containerd[1692]: time="2024-11-12T20:55:51.777740515Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.0\" with image id 
\"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:034dac492808ec38cd5e596ef6c97d7cd01aaab29a4952c746b27c75ecab8cf5\", size \"9395727\" in 1.53434588s" Nov 12 20:55:51.777942 containerd[1692]: time="2024-11-12T20:55:51.777851616Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.0\" returns image reference \"sha256:a58f4c4b5a7fc2dc0036f198a37464aa007ff2dfe31c8fddad993477291bea46\"" Nov 12 20:55:51.780580 containerd[1692]: time="2024-11-12T20:55:51.780537348Z" level=info msg="CreateContainer within sandbox \"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Nov 12 20:55:51.844099 containerd[1692]: time="2024-11-12T20:55:51.844053799Z" level=info msg="CreateContainer within sandbox \"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b8416abb67baff5ed207306704d13be854a3e4159ac8c2a511009629d5671159\"" Nov 12 20:55:51.844724 containerd[1692]: time="2024-11-12T20:55:51.844688507Z" level=info msg="StartContainer for \"b8416abb67baff5ed207306704d13be854a3e4159ac8c2a511009629d5671159\"" Nov 12 20:55:51.878140 systemd[1]: Started cri-containerd-b8416abb67baff5ed207306704d13be854a3e4159ac8c2a511009629d5671159.scope - libcontainer container b8416abb67baff5ed207306704d13be854a3e4159ac8c2a511009629d5671159. 
Nov 12 20:55:51.909594 containerd[1692]: time="2024-11-12T20:55:51.909448873Z" level=info msg="StartContainer for \"b8416abb67baff5ed207306704d13be854a3e4159ac8c2a511009629d5671159\" returns successfully" Nov 12 20:55:51.910816 containerd[1692]: time="2024-11-12T20:55:51.910678288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\"" Nov 12 20:55:53.197081 containerd[1692]: time="2024-11-12T20:55:53.197033207Z" level=info msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.240 [WARNING][5535] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fd5972-91e2-40bf-bb76-afa53f7f5d20", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0", 
Pod:"calico-apiserver-777ccbcc85-jc8rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7664069681b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.240 [INFO][5535] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.241 [INFO][5535] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" iface="eth0" netns="" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.241 [INFO][5535] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.241 [INFO][5535] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.264 [INFO][5541] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.264 [INFO][5541] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.264 [INFO][5541] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.271 [WARNING][5541] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.271 [INFO][5541] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.273 [INFO][5541] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:53.275391 containerd[1692]: 2024-11-12 20:55:53.274 [INFO][5535] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.276534 containerd[1692]: time="2024-11-12T20:55:53.275393434Z" level=info msg="TearDown network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" successfully" Nov 12 20:55:53.276534 containerd[1692]: time="2024-11-12T20:55:53.275426135Z" level=info msg="StopPodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" returns successfully" Nov 12 20:55:53.276613 containerd[1692]: time="2024-11-12T20:55:53.276579248Z" level=info msg="RemovePodSandbox for \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" Nov 12 20:55:53.276653 containerd[1692]: time="2024-11-12T20:55:53.276614249Z" level=info msg="Forcibly stopping sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\"" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.313 [WARNING][5560] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"71fd5972-91e2-40bf-bb76-afa53f7f5d20", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"3e7c33a577f809887fd287bd365ccbd80750f84fcd4a0383cd535866bc40fea0", Pod:"calico-apiserver-777ccbcc85-jc8rd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali7664069681b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.314 [INFO][5560] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.314 [INFO][5560] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" iface="eth0" netns="" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.314 [INFO][5560] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.314 [INFO][5560] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.334 [INFO][5566] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.334 [INFO][5566] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.334 [INFO][5566] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.340 [WARNING][5566] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.340 [INFO][5566] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" HandleID="k8s-pod-network.3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--jc8rd-eth0" Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.342 [INFO][5566] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:53.344534 containerd[1692]: 2024-11-12 20:55:53.343 [INFO][5560] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8" Nov 12 20:55:53.345237 containerd[1692]: time="2024-11-12T20:55:53.344588553Z" level=info msg="TearDown network for sandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" successfully" Nov 12 20:55:53.354705 containerd[1692]: time="2024-11-12T20:55:53.354658072Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:55:53.354858 containerd[1692]: time="2024-11-12T20:55:53.354807674Z" level=info msg="RemovePodSandbox \"3ef55c79f0aed4ada90f214e22d9bfc72ddbd7df2a70e0931bdd6dad149a08c8\" returns successfully" Nov 12 20:55:53.355496 containerd[1692]: time="2024-11-12T20:55:53.355462182Z" level=info msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.388 [WARNING][5584] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cc7e1b18-e518-4260-a198-f799965c328d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f", Pod:"coredns-7db6d8ff4d-9hddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c39d9d84a", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.388 [INFO][5584] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.388 [INFO][5584] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" iface="eth0" netns="" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.388 [INFO][5584] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.388 [INFO][5584] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.407 [INFO][5590] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.407 [INFO][5590] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.407 [INFO][5590] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.413 [WARNING][5590] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.413 [INFO][5590] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.415 [INFO][5590] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:53.416885 containerd[1692]: 2024-11-12 20:55:53.415 [INFO][5584] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.417532 containerd[1692]: time="2024-11-12T20:55:53.416982709Z" level=info msg="TearDown network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" successfully" Nov 12 20:55:53.417532 containerd[1692]: time="2024-11-12T20:55:53.417020410Z" level=info msg="StopPodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" returns successfully" Nov 12 20:55:53.417616 containerd[1692]: time="2024-11-12T20:55:53.417530816Z" level=info msg="RemovePodSandbox for \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" Nov 12 20:55:53.417616 containerd[1692]: time="2024-11-12T20:55:53.417568316Z" level=info msg="Forcibly stopping sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\"" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.453 [WARNING][5608] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"cc7e1b18-e518-4260-a198-f799965c328d", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"db65d5a6d8959facec9d3b9e17bf02aa98dc719d87a5495dbeb6d6371329eb3f", Pod:"coredns-7db6d8ff4d-9hddw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif7c39d9d84a", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.453 [INFO][5608] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.453 [INFO][5608] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" iface="eth0" netns="" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.453 [INFO][5608] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.453 [INFO][5608] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.475 [INFO][5614] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.476 [INFO][5614] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.476 [INFO][5614] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.481 [WARNING][5614] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.481 [INFO][5614] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" HandleID="k8s-pod-network.1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--9hddw-eth0" Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.482 [INFO][5614] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:53.484861 containerd[1692]: 2024-11-12 20:55:53.483 [INFO][5608] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d" Nov 12 20:55:53.484861 containerd[1692]: time="2024-11-12T20:55:53.484566109Z" level=info msg="TearDown network for sandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" successfully" Nov 12 20:55:53.494174 containerd[1692]: time="2024-11-12T20:55:53.494132522Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Nov 12 20:55:53.494286 containerd[1692]: time="2024-11-12T20:55:53.494203723Z" level=info msg="RemovePodSandbox \"1974eadd2828c196c92cc12a4e9bdf4a022c7680983ac77ea1ac087295dafd2d\" returns successfully" Nov 12 20:55:53.494765 containerd[1692]: time="2024-11-12T20:55:53.494729129Z" level=info msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.549 [WARNING][5632] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b2c03e2-3981-44eb-8347-9662269e0c09", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7", Pod:"coredns-7db6d8ff4d-km9bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96ebb4ddc94", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.549 [INFO][5632] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.549 [INFO][5632] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" iface="eth0" netns="" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.549 [INFO][5632] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.550 [INFO][5632] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.576 [INFO][5640] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.577 [INFO][5640] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.577 [INFO][5640] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.583 [WARNING][5640] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.583 [INFO][5640] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0" Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.586 [INFO][5640] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Nov 12 20:55:53.589327 containerd[1692]: 2024-11-12 20:55:53.587 [INFO][5632] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Nov 12 20:55:53.590227 containerd[1692]: time="2024-11-12T20:55:53.589376749Z" level=info msg="TearDown network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" successfully" Nov 12 20:55:53.590227 containerd[1692]: time="2024-11-12T20:55:53.589408549Z" level=info msg="StopPodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" returns successfully" Nov 12 20:55:53.591002 containerd[1692]: time="2024-11-12T20:55:53.590644564Z" level=info msg="RemovePodSandbox for \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" Nov 12 20:55:53.591002 containerd[1692]: time="2024-11-12T20:55:53.590682165Z" level=info msg="Forcibly stopping sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\"" Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.639 [WARNING][5658] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"9b2c03e2-3981-44eb-8347-9662269e0c09", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"4cfc9c452f54b75e8c6808dd5895f640380e1ec3debaca4f5bd78179a08737a7", Pod:"coredns-7db6d8ff4d-km9bd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.110.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali96ebb4ddc94", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.639 [INFO][5658] 
cni-plugin/k8s.go 608: Cleaning up netns ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.639 [INFO][5658] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" iface="eth0" netns=""
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.639 [INFO][5658] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.639 [INFO][5658] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.660 [INFO][5665] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.660 [INFO][5665] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.660 [INFO][5665] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.669 [WARNING][5665] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.669 [INFO][5665] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" HandleID="k8s-pod-network.6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-coredns--7db6d8ff4d--km9bd-eth0"
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.672 [INFO][5665] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:53.676860 containerd[1692]: 2024-11-12 20:55:53.673 [INFO][5658] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c"
Nov 12 20:55:53.677983 containerd[1692]: time="2024-11-12T20:55:53.677678594Z" level=info msg="TearDown network for sandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" successfully"
Nov 12 20:55:53.690694 containerd[1692]: time="2024-11-12T20:55:53.690492645Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:53.690694 containerd[1692]: time="2024-11-12T20:55:53.690574946Z" level=info msg="RemovePodSandbox \"6198b89071c019a2c169754d7085a90dbc3dab2ff089e9917c05ae53eafd001c\" returns successfully"
Nov 12 20:55:53.691834 containerd[1692]: time="2024-11-12T20:55:53.691750560Z" level=info msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\""
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.724 [WARNING][5683] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71853af8-7114-4e11-9b62-8d92def4793d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb", Pod:"csi-node-driver-jt7lt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5cd95e7c199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.724 [INFO][5683] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.724 [INFO][5683] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" iface="eth0" netns=""
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.724 [INFO][5683] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.724 [INFO][5683] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.745 [INFO][5689] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.746 [INFO][5689] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.746 [INFO][5689] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.750 [WARNING][5689] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.750 [INFO][5689] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.752 [INFO][5689] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:53.754371 containerd[1692]: 2024-11-12 20:55:53.753 [INFO][5683] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.754371 containerd[1692]: time="2024-11-12T20:55:53.754337601Z" level=info msg="TearDown network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" successfully"
Nov 12 20:55:53.754371 containerd[1692]: time="2024-11-12T20:55:53.754366201Z" level=info msg="StopPodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" returns successfully"
Nov 12 20:55:53.755117 containerd[1692]: time="2024-11-12T20:55:53.754887607Z" level=info msg="RemovePodSandbox for \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\""
Nov 12 20:55:53.755117 containerd[1692]: time="2024-11-12T20:55:53.754921708Z" level=info msg="Forcibly stopping sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\""
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.797 [WARNING][5707] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"71853af8-7114-4e11-9b62-8d92def4793d", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"85bdc57578", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb", Pod:"csi-node-driver-jt7lt", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.110.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali5cd95e7c199", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.797 [INFO][5707] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.797 [INFO][5707] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" iface="eth0" netns=""
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.797 [INFO][5707] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.797 [INFO][5707] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.833 [INFO][5718] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.833 [INFO][5718] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.833 [INFO][5718] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.841 [WARNING][5718] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.841 [INFO][5718] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" HandleID="k8s-pod-network.d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-csi--node--driver--jt7lt-eth0"
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.843 [INFO][5718] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:53.847737 containerd[1692]: 2024-11-12 20:55:53.845 [INFO][5707] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd"
Nov 12 20:55:53.847737 containerd[1692]: time="2024-11-12T20:55:53.847168499Z" level=info msg="TearDown network for sandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" successfully"
Nov 12 20:55:53.860045 containerd[1692]: time="2024-11-12T20:55:53.859870749Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:53.860528 containerd[1692]: time="2024-11-12T20:55:53.860184253Z" level=info msg="RemovePodSandbox \"d7d23e7fa7a7a416edecb3505f7075e165c93112ca8b86ea94a2895974b4d1fd\" returns successfully"
Nov 12 20:55:53.860694 containerd[1692]: time="2024-11-12T20:55:53.860664759Z" level=info msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\""
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.917 [WARNING][5736] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3507c8-8c76-43ce-abb3-9bd37cc91cb6", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74", Pod:"calico-apiserver-777ccbcc85-8d9fq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f53dc0ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.917 [INFO][5736] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.917 [INFO][5736] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" iface="eth0" netns=""
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.917 [INFO][5736] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.917 [INFO][5736] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.953 [INFO][5743] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.953 [INFO][5743] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.953 [INFO][5743] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.961 [WARNING][5743] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.961 [INFO][5743] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.964 [INFO][5743] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:53.969945 containerd[1692]: 2024-11-12 20:55:53.967 [INFO][5736] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:53.969945 containerd[1692]: time="2024-11-12T20:55:53.969783950Z" level=info msg="TearDown network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" successfully"
Nov 12 20:55:53.969945 containerd[1692]: time="2024-11-12T20:55:53.969814850Z" level=info msg="StopPodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" returns successfully"
Nov 12 20:55:53.971917 containerd[1692]: time="2024-11-12T20:55:53.971661072Z" level=info msg="RemovePodSandbox for \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\""
Nov 12 20:55:53.971917 containerd[1692]: time="2024-11-12T20:55:53.971699073Z" level=info msg="Forcibly stopping sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\""
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.023 [WARNING][5762] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0", GenerateName:"calico-apiserver-777ccbcc85-", Namespace:"calico-apiserver", SelfLink:"", UID:"be3507c8-8c76-43ce-abb3-9bd37cc91cb6", ResourceVersion:"874", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"777ccbcc85", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"f5d65da9f7cea32430fc407be1d795b96f397394e764d198ba4c59e431ddeb74", Pod:"calico-apiserver-777ccbcc85-8d9fq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.110.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0f53dc0ce1b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.023 [INFO][5762] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.023 [INFO][5762] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" iface="eth0" netns=""
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.023 [INFO][5762] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.024 [INFO][5762] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.060 [INFO][5768] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.062 [INFO][5768] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.062 [INFO][5768] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.069 [WARNING][5768] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.069 [INFO][5768] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" HandleID="k8s-pod-network.d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--apiserver--777ccbcc85--8d9fq-eth0"
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.071 [INFO][5768] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:54.078489 containerd[1692]: 2024-11-12 20:55:54.074 [INFO][5762] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb"
Nov 12 20:55:54.080003 containerd[1692]: time="2024-11-12T20:55:54.079557049Z" level=info msg="TearDown network for sandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" successfully"
Nov 12 20:55:54.094694 containerd[1692]: time="2024-11-12T20:55:54.094654527Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:54.095024 containerd[1692]: time="2024-11-12T20:55:54.094913330Z" level=info msg="RemovePodSandbox \"d5d31a51964e74bda15f3281a822fe48ac4dab929eddd6d1805d6f9d3d4b2eeb\" returns successfully"
Nov 12 20:55:54.095416 containerd[1692]: time="2024-11-12T20:55:54.095388836Z" level=info msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\""
Nov 12 20:55:54.115086 containerd[1692]: time="2024-11-12T20:55:54.115017668Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:54.118857 containerd[1692]: time="2024-11-12T20:55:54.118669411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0: active requests=0, bytes read=10501080"
Nov 12 20:55:54.122220 containerd[1692]: time="2024-11-12T20:55:54.122168753Z" level=info msg="ImageCreate event name:\"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:54.128257 containerd[1692]: time="2024-11-12T20:55:54.128197624Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Nov 12 20:55:54.130320 containerd[1692]: time="2024-11-12T20:55:54.130209148Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" with image id \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:69153d7038238f84185e52b4a84e11c5cf5af716ef8613fb0a475ea311dca0cb\", size \"11994124\" in 2.21949196s"
Nov 12 20:55:54.130320 containerd[1692]: time="2024-11-12T20:55:54.130266849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.0\" returns image reference \"sha256:448cca84519399c3138626aff1a43b0b9168ecbe27e0e8e6df63416012eeeaae\""
Nov 12 20:55:54.134267 containerd[1692]: time="2024-11-12T20:55:54.134167695Z" level=info msg="CreateContainer within sandbox \"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}"
Nov 12 20:55:54.169847 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190487995.mount: Deactivated successfully.
Nov 12 20:55:54.174316 containerd[1692]: time="2024-11-12T20:55:54.174182468Z" level=info msg="CreateContainer within sandbox \"20c6dfd53fec830dd0cfadc3aea347e87baaca96b78cbbdfada3fab26a49aadb\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"175444f583be645e2df4004bd493edf48ebbe05ecf6de7dd6876b9afe1af69e5\""
Nov 12 20:55:54.174876 containerd[1692]: time="2024-11-12T20:55:54.174806076Z" level=info msg="StartContainer for \"175444f583be645e2df4004bd493edf48ebbe05ecf6de7dd6876b9afe1af69e5\""
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.130 [WARNING][5788] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0", GenerateName:"calico-kube-controllers-5bf77db8d-", Namespace:"calico-system", SelfLink:"", UID:"af49ceee-a159-42af-9ac7-30f86ab527d2", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf77db8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878", Pod:"calico-kube-controllers-5bf77db8d-lgc2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9432c011cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.131 [INFO][5788] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.132 [INFO][5788] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" iface="eth0" netns=""
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.132 [INFO][5788] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.132 [INFO][5788] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.155 [INFO][5794] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.156 [INFO][5794] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.156 [INFO][5794] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.168 [WARNING][5794] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.168 [INFO][5794] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.171 [INFO][5794] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:54.176463 containerd[1692]: 2024-11-12 20:55:54.173 [INFO][5788] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.177033 containerd[1692]: time="2024-11-12T20:55:54.176619097Z" level=info msg="TearDown network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" successfully"
Nov 12 20:55:54.177033 containerd[1692]: time="2024-11-12T20:55:54.176644197Z" level=info msg="StopPodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" returns successfully"
Nov 12 20:55:54.177741 containerd[1692]: time="2024-11-12T20:55:54.177697610Z" level=info msg="RemovePodSandbox for \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\""
Nov 12 20:55:54.177818 containerd[1692]: time="2024-11-12T20:55:54.177736210Z" level=info msg="Forcibly stopping sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\""
Nov 12 20:55:54.229210 systemd[1]: Started cri-containerd-175444f583be645e2df4004bd493edf48ebbe05ecf6de7dd6876b9afe1af69e5.scope - libcontainer container 175444f583be645e2df4004bd493edf48ebbe05ecf6de7dd6876b9afe1af69e5.
Nov 12 20:55:54.287791 containerd[1692]: time="2024-11-12T20:55:54.287622110Z" level=info msg="StartContainer for \"175444f583be645e2df4004bd493edf48ebbe05ecf6de7dd6876b9afe1af69e5\" returns successfully"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.253 [WARNING][5819] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0", GenerateName:"calico-kube-controllers-5bf77db8d-", Namespace:"calico-system", SelfLink:"", UID:"af49ceee-a159-42af-9ac7-30f86ab527d2", ResourceVersion:"895", Generation:0, CreationTimestamp:time.Date(2024, time.November, 12, 20, 55, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5bf77db8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4081.2.0-a-c73ec1ae7a", ContainerID:"74f7f70297be7c2e89fea5a81251e413e0dd11bf6ad79d04c0800ce254c86878", Pod:"calico-kube-controllers-5bf77db8d-lgc2s", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.110.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calif9432c011cb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.253 [INFO][5819] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.253 [INFO][5819] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" iface="eth0" netns=""
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.253 [INFO][5819] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.254 [INFO][5819] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.284 [INFO][5843] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.286 [INFO][5843] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.286 [INFO][5843] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.292 [WARNING][5843] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.292 [INFO][5843] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" HandleID="k8s-pod-network.4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb" Workload="ci--4081.2.0--a--c73ec1ae7a-k8s-calico--kube--controllers--5bf77db8d--lgc2s-eth0"
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.297 [INFO][5843] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Nov 12 20:55:54.301237 containerd[1692]: 2024-11-12 20:55:54.300 [INFO][5819] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb"
Nov 12 20:55:54.301237 containerd[1692]: time="2024-11-12T20:55:54.301167571Z" level=info msg="TearDown network for sandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" successfully"
Nov 12 20:55:54.309264 containerd[1692]: time="2024-11-12T20:55:54.309217666Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Nov 12 20:55:54.309471 containerd[1692]: time="2024-11-12T20:55:54.309289467Z" level=info msg="RemovePodSandbox \"4e716f7641a723d7ef47a74f40ca8a4ac52d89d4829f78ba18a3dc01648ea9cb\" returns successfully"
Nov 12 20:55:55.114159 kubelet[3233]: I1112 20:55:55.113122 3233 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-jt7lt" podStartSLOduration=32.397384714 podStartE2EDuration="40.113095977s" podCreationTimestamp="2024-11-12 20:55:15 +0000 UTC" firstStartedPulling="2024-11-12 20:55:46.415889901 +0000 UTC m=+53.751511398" lastFinishedPulling="2024-11-12 20:55:54.131601164 +0000 UTC m=+61.467222661" observedRunningTime="2024-11-12 20:55:55.112043465 +0000 UTC m=+62.447664862" watchObservedRunningTime="2024-11-12 20:55:55.113095977 +0000 UTC m=+62.448717474"
Nov 12 20:55:55.251472 kubelet[3233]: I1112 20:55:55.251098 3233 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0
Nov 12 20:55:55.251472 kubelet[3233]: I1112 20:55:55.251137 3233 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock
Nov 12 20:55:59.606194 systemd[1]: run-containerd-runc-k8s.io-c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f-runc.pg7cY5.mount: Deactivated successfully.
Nov 12 20:56:17.413275 kubelet[3233]: I1112 20:56:17.412806 3233 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Nov 12 20:56:23.574898 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:33588.service - OpenSSH per-connection server daemon (10.200.16.10:33588).
Nov 12 20:56:24.206048 sshd[5928]: Accepted publickey for core from 10.200.16.10 port 33588 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:24.208756 sshd[5928]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:24.215423 systemd-logind[1670]: New session 10 of user core.
Nov 12 20:56:24.223178 systemd[1]: Started session-10.scope - Session 10 of User core.
Nov 12 20:56:24.732673 sshd[5928]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:24.736800 systemd-logind[1670]: Session 10 logged out. Waiting for processes to exit.
Nov 12 20:56:24.737729 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:33588.service: Deactivated successfully.
Nov 12 20:56:24.740328 systemd[1]: session-10.scope: Deactivated successfully.
Nov 12 20:56:24.741539 systemd-logind[1670]: Removed session 10.
Nov 12 20:56:29.857068 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:49392.service - OpenSSH per-connection server daemon (10.200.16.10:49392).
Nov 12 20:56:30.480677 sshd[5979]: Accepted publickey for core from 10.200.16.10 port 49392 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:30.483084 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:30.492934 systemd-logind[1670]: New session 11 of user core.
Nov 12 20:56:30.499137 systemd[1]: Started session-11.scope - Session 11 of User core.
Nov 12 20:56:30.986637 sshd[5979]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:30.992362 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:49392.service: Deactivated successfully.
Nov 12 20:56:30.992557 systemd-logind[1670]: Session 11 logged out. Waiting for processes to exit.
Nov 12 20:56:30.995362 systemd[1]: session-11.scope: Deactivated successfully.
Nov 12 20:56:30.996424 systemd-logind[1670]: Removed session 11.
Nov 12 20:56:36.103276 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:49402.service - OpenSSH per-connection server daemon (10.200.16.10:49402).
Nov 12 20:56:36.724120 sshd[5993]: Accepted publickey for core from 10.200.16.10 port 49402 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:36.724764 sshd[5993]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:36.729765 systemd-logind[1670]: New session 12 of user core.
Nov 12 20:56:36.734124 systemd[1]: Started session-12.scope - Session 12 of User core.
Nov 12 20:56:37.222315 sshd[5993]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:37.227069 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:49402.service: Deactivated successfully.
Nov 12 20:56:37.227148 systemd-logind[1670]: Session 12 logged out. Waiting for processes to exit.
Nov 12 20:56:37.229606 systemd[1]: session-12.scope: Deactivated successfully.
Nov 12 20:56:37.230593 systemd-logind[1670]: Removed session 12.
Nov 12 20:56:37.337287 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:49410.service - OpenSSH per-connection server daemon (10.200.16.10:49410).
Nov 12 20:56:37.960559 sshd[6009]: Accepted publickey for core from 10.200.16.10 port 49410 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:37.961208 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:37.966278 systemd-logind[1670]: New session 13 of user core.
Nov 12 20:56:37.969174 systemd[1]: Started session-13.scope - Session 13 of User core.
Nov 12 20:56:38.500216 sshd[6009]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:38.506357 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:49410.service: Deactivated successfully.
Nov 12 20:56:38.508666 systemd[1]: session-13.scope: Deactivated successfully.
Nov 12 20:56:38.509444 systemd-logind[1670]: Session 13 logged out. Waiting for processes to exit.
Nov 12 20:56:38.510659 systemd-logind[1670]: Removed session 13.
Nov 12 20:56:38.614536 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:46104.service - OpenSSH per-connection server daemon (10.200.16.10:46104).
Nov 12 20:56:39.233993 sshd[6020]: Accepted publickey for core from 10.200.16.10 port 46104 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:39.234900 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:39.239728 systemd-logind[1670]: New session 14 of user core.
Nov 12 20:56:39.244111 systemd[1]: Started session-14.scope - Session 14 of User core.
Nov 12 20:56:39.732600 sshd[6020]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:39.735546 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:46104.service: Deactivated successfully.
Nov 12 20:56:39.738437 systemd[1]: session-14.scope: Deactivated successfully.
Nov 12 20:56:39.740704 systemd-logind[1670]: Session 14 logged out. Waiting for processes to exit.
Nov 12 20:56:39.745895 systemd-logind[1670]: Removed session 14.
Nov 12 20:56:44.848285 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:46112.service - OpenSSH per-connection server daemon (10.200.16.10:46112).
Nov 12 20:56:45.470305 sshd[6033]: Accepted publickey for core from 10.200.16.10 port 46112 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:45.471801 sshd[6033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:45.476019 systemd-logind[1670]: New session 15 of user core.
Nov 12 20:56:45.482135 systemd[1]: Started session-15.scope - Session 15 of User core.
Nov 12 20:56:45.967897 sshd[6033]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:45.972568 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:46112.service: Deactivated successfully.
Nov 12 20:56:45.975326 systemd[1]: session-15.scope: Deactivated successfully.
Nov 12 20:56:45.976166 systemd-logind[1670]: Session 15 logged out. Waiting for processes to exit.
Nov 12 20:56:45.977188 systemd-logind[1670]: Removed session 15.
Nov 12 20:56:51.085317 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:54312.service - OpenSSH per-connection server daemon (10.200.16.10:54312).
Nov 12 20:56:51.712271 sshd[6046]: Accepted publickey for core from 10.200.16.10 port 54312 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:51.714596 sshd[6046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:51.722051 systemd-logind[1670]: New session 16 of user core.
Nov 12 20:56:51.727164 systemd[1]: Started session-16.scope - Session 16 of User core.
Nov 12 20:56:52.213685 sshd[6046]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:52.217346 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:54312.service: Deactivated successfully.
Nov 12 20:56:52.219831 systemd[1]: session-16.scope: Deactivated successfully.
Nov 12 20:56:52.221792 systemd-logind[1670]: Session 16 logged out. Waiting for processes to exit.
Nov 12 20:56:52.223271 systemd-logind[1670]: Removed session 16.
Nov 12 20:56:57.333289 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:54324.service - OpenSSH per-connection server daemon (10.200.16.10:54324).
Nov 12 20:56:57.953598 sshd[6087]: Accepted publickey for core from 10.200.16.10 port 54324 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:56:57.955348 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:56:57.960525 systemd-logind[1670]: New session 17 of user core.
Nov 12 20:56:57.968132 systemd[1]: Started session-17.scope - Session 17 of User core.
Nov 12 20:56:58.452842 sshd[6087]: pam_unix(sshd:session): session closed for user core
Nov 12 20:56:58.459470 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:54324.service: Deactivated successfully.
Nov 12 20:56:58.462903 systemd[1]: session-17.scope: Deactivated successfully.
Nov 12 20:56:58.464922 systemd-logind[1670]: Session 17 logged out. Waiting for processes to exit.
Nov 12 20:56:58.468439 systemd-logind[1670]: Removed session 17.
Nov 12 20:56:59.600244 systemd[1]: run-containerd-runc-k8s.io-c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f-runc.tGv7a0.mount: Deactivated successfully.
Nov 12 20:57:03.564150 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:39226.service - OpenSSH per-connection server daemon (10.200.16.10:39226).
Nov 12 20:57:04.196421 sshd[6125]: Accepted publickey for core from 10.200.16.10 port 39226 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:04.198193 sshd[6125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:04.203833 systemd-logind[1670]: New session 18 of user core.
Nov 12 20:57:04.209157 systemd[1]: Started session-18.scope - Session 18 of User core.
Nov 12 20:57:04.708385 sshd[6125]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:04.711546 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:39226.service: Deactivated successfully.
Nov 12 20:57:04.713852 systemd[1]: session-18.scope: Deactivated successfully.
Nov 12 20:57:04.715503 systemd-logind[1670]: Session 18 logged out. Waiting for processes to exit.
Nov 12 20:57:04.717282 systemd-logind[1670]: Removed session 18.
Nov 12 20:57:04.824268 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:39228.service - OpenSSH per-connection server daemon (10.200.16.10:39228).
Nov 12 20:57:05.448214 sshd[6138]: Accepted publickey for core from 10.200.16.10 port 39228 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:05.450036 sshd[6138]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:05.455191 systemd-logind[1670]: New session 19 of user core.
Nov 12 20:57:05.459136 systemd[1]: Started session-19.scope - Session 19 of User core.
Nov 12 20:57:06.029597 sshd[6138]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:06.033886 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:39228.service: Deactivated successfully.
Nov 12 20:57:06.036755 systemd[1]: session-19.scope: Deactivated successfully.
Nov 12 20:57:06.038327 systemd-logind[1670]: Session 19 logged out. Waiting for processes to exit.
Nov 12 20:57:06.039397 systemd-logind[1670]: Removed session 19.
Nov 12 20:57:06.143296 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:39234.service - OpenSSH per-connection server daemon (10.200.16.10:39234).
Nov 12 20:57:06.765941 sshd[6149]: Accepted publickey for core from 10.200.16.10 port 39234 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:06.767675 sshd[6149]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:06.772249 systemd-logind[1670]: New session 20 of user core.
Nov 12 20:57:06.778148 systemd[1]: Started session-20.scope - Session 20 of User core.
Nov 12 20:57:09.200164 sshd[6149]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:09.203392 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:39234.service: Deactivated successfully.
Nov 12 20:57:09.205561 systemd[1]: session-20.scope: Deactivated successfully.
Nov 12 20:57:09.207504 systemd-logind[1670]: Session 20 logged out. Waiting for processes to exit.
Nov 12 20:57:09.208628 systemd-logind[1670]: Removed session 20.
Nov 12 20:57:09.314900 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:39488.service - OpenSSH per-connection server daemon (10.200.16.10:39488).
Nov 12 20:57:09.940697 sshd[6172]: Accepted publickey for core from 10.200.16.10 port 39488 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:09.942569 sshd[6172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:09.948009 systemd-logind[1670]: New session 21 of user core.
Nov 12 20:57:09.951157 systemd[1]: Started session-21.scope - Session 21 of User core.
Nov 12 20:57:10.544266 sshd[6172]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:10.548072 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:39488.service: Deactivated successfully.
Nov 12 20:57:10.550807 systemd[1]: session-21.scope: Deactivated successfully.
Nov 12 20:57:10.552561 systemd-logind[1670]: Session 21 logged out. Waiting for processes to exit.
Nov 12 20:57:10.554325 systemd-logind[1670]: Removed session 21.
Nov 12 20:57:10.679328 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:39504.service - OpenSSH per-connection server daemon (10.200.16.10:39504).
Nov 12 20:57:11.297301 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 39504 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:11.299327 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:11.304009 systemd-logind[1670]: New session 22 of user core.
Nov 12 20:57:11.309139 systemd[1]: Started session-22.scope - Session 22 of User core.
Nov 12 20:57:11.803856 sshd[6183]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:11.808199 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:39504.service: Deactivated successfully.
Nov 12 20:57:11.810910 systemd[1]: session-22.scope: Deactivated successfully.
Nov 12 20:57:11.811726 systemd-logind[1670]: Session 22 logged out. Waiting for processes to exit.
Nov 12 20:57:11.812763 systemd-logind[1670]: Removed session 22.
Nov 12 20:57:16.919328 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:39514.service - OpenSSH per-connection server daemon (10.200.16.10:39514).
Nov 12 20:57:17.549653 sshd[6199]: Accepted publickey for core from 10.200.16.10 port 39514 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:17.551340 sshd[6199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:17.556044 systemd-logind[1670]: New session 23 of user core.
Nov 12 20:57:17.561152 systemd[1]: Started session-23.scope - Session 23 of User core.
Nov 12 20:57:18.048521 sshd[6199]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:18.052305 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:39514.service: Deactivated successfully.
Nov 12 20:57:18.055195 systemd[1]: session-23.scope: Deactivated successfully.
Nov 12 20:57:18.057210 systemd-logind[1670]: Session 23 logged out. Waiting for processes to exit.
Nov 12 20:57:18.058681 systemd-logind[1670]: Removed session 23.
Nov 12 20:57:23.163224 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:33324.service - OpenSSH per-connection server daemon (10.200.16.10:33324).
Nov 12 20:57:23.784638 sshd[6250]: Accepted publickey for core from 10.200.16.10 port 33324 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:23.787405 sshd[6250]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:23.794038 systemd-logind[1670]: New session 24 of user core.
Nov 12 20:57:23.799129 systemd[1]: Started session-24.scope - Session 24 of User core.
Nov 12 20:57:24.325418 sshd[6250]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:24.329269 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:33324.service: Deactivated successfully.
Nov 12 20:57:24.331658 systemd[1]: session-24.scope: Deactivated successfully.
Nov 12 20:57:24.334018 systemd-logind[1670]: Session 24 logged out. Waiting for processes to exit.
Nov 12 20:57:24.335224 systemd-logind[1670]: Removed session 24.
Nov 12 20:57:29.438376 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:46890.service - OpenSSH per-connection server daemon (10.200.16.10:46890).
Nov 12 20:57:29.607802 systemd[1]: run-containerd-runc-k8s.io-c4e0e7e19ef9dfdfe4f9f775136661fbde008ca49f3ff904f8708ff218962d2f-runc.Ycg5OU.mount: Deactivated successfully.
Nov 12 20:57:30.069689 sshd[6263]: Accepted publickey for core from 10.200.16.10 port 46890 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:30.072170 sshd[6263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:30.080161 systemd-logind[1670]: New session 25 of user core.
Nov 12 20:57:30.086844 systemd[1]: Started session-25.scope - Session 25 of User core.
Nov 12 20:57:30.581466 sshd[6263]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:30.585735 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:46890.service: Deactivated successfully.
Nov 12 20:57:30.587884 systemd[1]: session-25.scope: Deactivated successfully.
Nov 12 20:57:30.589355 systemd-logind[1670]: Session 25 logged out. Waiting for processes to exit.
Nov 12 20:57:30.590403 systemd-logind[1670]: Removed session 25.
Nov 12 20:57:35.696271 systemd[1]: Started sshd@23-10.200.8.39:22-10.200.16.10:46904.service - OpenSSH per-connection server daemon (10.200.16.10:46904).
Nov 12 20:57:36.316169 sshd[6312]: Accepted publickey for core from 10.200.16.10 port 46904 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:36.318412 sshd[6312]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:36.323324 systemd-logind[1670]: New session 26 of user core.
Nov 12 20:57:36.333142 systemd[1]: Started session-26.scope - Session 26 of User core.
Nov 12 20:57:36.827440 sshd[6312]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:36.830618 systemd[1]: sshd@23-10.200.8.39:22-10.200.16.10:46904.service: Deactivated successfully.
Nov 12 20:57:36.832932 systemd[1]: session-26.scope: Deactivated successfully.
Nov 12 20:57:36.834844 systemd-logind[1670]: Session 26 logged out. Waiting for processes to exit.
Nov 12 20:57:36.835833 systemd-logind[1670]: Removed session 26.
Nov 12 20:57:41.942307 systemd[1]: Started sshd@24-10.200.8.39:22-10.200.16.10:55732.service - OpenSSH per-connection server daemon (10.200.16.10:55732).
Nov 12 20:57:42.571115 sshd[6328]: Accepted publickey for core from 10.200.16.10 port 55732 ssh2: RSA SHA256:BOQGsSBEpOrSnIwieA47uB4sHgMaBfv+65gLPUR/iV0
Nov 12 20:57:42.572768 sshd[6328]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Nov 12 20:57:42.577549 systemd-logind[1670]: New session 27 of user core.
Nov 12 20:57:42.584125 systemd[1]: Started session-27.scope - Session 27 of User core.
Nov 12 20:57:43.067443 sshd[6328]: pam_unix(sshd:session): session closed for user core
Nov 12 20:57:43.071507 systemd[1]: sshd@24-10.200.8.39:22-10.200.16.10:55732.service: Deactivated successfully.
Nov 12 20:57:43.073694 systemd[1]: session-27.scope: Deactivated successfully.
Nov 12 20:57:43.074478 systemd-logind[1670]: Session 27 logged out. Waiting for processes to exit.
Nov 12 20:57:43.075416 systemd-logind[1670]: Removed session 27.