Aug 5 22:10:58.053953 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024 Aug 5 22:10:58.053989 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.054004 kernel: BIOS-provided physical RAM map: Aug 5 22:10:58.054015 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 5 22:10:58.054026 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 5 22:10:58.054037 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 5 22:10:58.054051 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Aug 5 22:10:58.054065 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Aug 5 22:10:58.054076 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 5 22:10:58.054088 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 5 22:10:58.054099 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 5 22:10:58.054110 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 5 22:10:58.054121 kernel: printk: bootconsole [earlyser0] enabled Aug 5 22:10:58.054133 kernel: NX (Execute Disable) protection: active Aug 5 22:10:58.054150 kernel: APIC: Static calls initialized Aug 5 22:10:58.054163 kernel: efi: EFI v2.7 by Microsoft Aug 5 22:10:58.054176 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 Aug 5 22:10:58.054189 kernel: SMBIOS 3.1.0 present. 
Aug 5 22:10:58.054201 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 5 22:10:58.054214 kernel: Hypervisor detected: Microsoft Hyper-V Aug 5 22:10:58.054227 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 5 22:10:58.054239 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Aug 5 22:10:58.054252 kernel: Hyper-V: Nested features: 0x1e0101 Aug 5 22:10:58.054264 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 5 22:10:58.054279 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 5 22:10:58.054292 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:10:58.054305 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:10:58.054318 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 5 22:10:58.054332 kernel: tsc: Detected 2593.906 MHz processor Aug 5 22:10:58.054345 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:10:58.054358 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:10:58.054371 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 5 22:10:58.054384 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 5 22:10:58.054400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:10:58.054413 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 5 22:10:58.054425 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 5 22:10:58.054438 kernel: Using GB pages for direct mapping Aug 5 22:10:58.054450 kernel: Secure boot disabled Aug 5 22:10:58.054463 kernel: ACPI: Early table checksum verification disabled Aug 5 22:10:58.054476 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 5 22:10:58.054495 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054510 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054524 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 22:10:58.054537 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 5 22:10:58.054551 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054565 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054579 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054596 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054610 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054623 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054637 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054651 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 5 22:10:58.054664 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 5 22:10:58.054711 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 5 22:10:58.054724 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 5 22:10:58.054741 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Aug 5 22:10:58.054755 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 5 22:10:58.054768 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Aug 5 22:10:58.054782 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 5 22:10:58.054795 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 5 22:10:58.054808 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 5 22:10:58.054820 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:10:58.054831 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:10:58.054844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 5 22:10:58.054860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 5 22:10:58.054872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 5 22:10:58.054885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 5 22:10:58.054899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 5 22:10:58.054912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 5 22:10:58.054926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 5 22:10:58.054939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 5 22:10:58.054952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 5 22:10:58.054965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 5 22:10:58.054981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 5 22:10:58.054995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 5 22:10:58.055008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 5 22:10:58.055022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 5 22:10:58.055034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 5 22:10:58.055046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 5 22:10:58.055058 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Aug 5 22:10:58.055071 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 5 22:10:58.055085 kernel: Zone ranges: Aug 5 22:10:58.055102 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:10:58.055114 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:10:58.055127 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:10:58.055141 kernel: Movable zone start for each node Aug 5 22:10:58.055155 kernel: Early memory node ranges Aug 5 22:10:58.055168 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 5 22:10:58.055182 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 5 22:10:58.055196 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 5 22:10:58.055210 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:10:58.055226 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 5 22:10:58.055239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:10:58.055254 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 5 22:10:58.055267 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 5 22:10:58.055281 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 5 
22:10:58.055295 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 5 22:10:58.055309 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:10:58.055322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:10:58.055336 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:10:58.055352 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 5 22:10:58.055366 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:10:58.055380 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 5 22:10:58.055394 kernel: Booting paravirtualized kernel on Hyper-V Aug 5 22:10:58.055408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:10:58.055421 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:10:58.055435 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:10:58.055449 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:10:58.055462 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:10:58.055478 kernel: Hyper-V: PV spinlocks enabled Aug 5 22:10:58.055492 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:10:58.055507 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.055522 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:10:58.055536 kernel: random: crng init done Aug 5 22:10:58.055549 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:10:58.055563 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:10:58.055576 kernel: Fallback order for Node 0: 0 Aug 5 22:10:58.055593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 5 22:10:58.055616 kernel: Policy zone: Normal Aug 5 22:10:58.055631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:10:58.055647 kernel: software IO TLB: area num 2. Aug 5 22:10:58.055662 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved) Aug 5 22:10:58.055717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:10:58.055732 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:10:58.055747 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:10:58.055761 kernel: Dynamic Preempt: voluntary Aug 5 22:10:58.055776 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:10:58.055792 kernel: rcu: RCU event tracing is enabled. Aug 5 22:10:58.055810 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:10:58.055824 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:10:58.055839 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:10:58.055854 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:10:58.055869 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 22:10:58.055886 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:10:58.055901 kernel: Using NULL legacy PIC Aug 5 22:10:58.055915 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 5 22:10:58.055930 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:10:58.055945 kernel: Console: colour dummy device 80x25 Aug 5 22:10:58.055959 kernel: printk: console [tty1] enabled Aug 5 22:10:58.055974 kernel: printk: console [ttyS0] enabled Aug 5 22:10:58.055989 kernel: printk: bootconsole [earlyser0] disabled Aug 5 22:10:58.056003 kernel: ACPI: Core revision 20230628 Aug 5 22:10:58.056018 kernel: Failed to register legacy timer interrupt Aug 5 22:10:58.056035 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:10:58.056050 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 22:10:58.056065 kernel: Hyper-V: Using IPI hypercalls Aug 5 22:10:58.056079 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 5 22:10:58.056094 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 5 22:10:58.056109 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 5 22:10:58.056124 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 5 22:10:58.056139 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 5 22:10:58.056153 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 5 22:10:58.056171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Aug 5 22:10:58.056186 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 5 22:10:58.056200 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Aug 5 22:10:58.056215 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:10:58.056230 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:10:58.056244 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:10:58.056258 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:10:58.056273 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 5 22:10:58.056287 kernel: RETBleed: Vulnerable Aug 5 22:10:58.056304 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:10:58.056319 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:10:58.056333 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:10:58.056351 kernel: GDS: Unknown: Dependent on hypervisor status Aug 5 22:10:58.056365 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:10:58.056380 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:10:58.056394 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:10:58.056409 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 5 22:10:58.056423 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 5 22:10:58.056438 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 5 22:10:58.056453 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:10:58.056470 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 5 22:10:58.056484 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 5 22:10:58.056499 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 5 22:10:58.056513 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 5 22:10:58.056527 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:10:58.056541 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:10:58.056555 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:10:58.056570 kernel: SELinux: Initializing. Aug 5 22:10:58.056584 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.056599 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.056614 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 5 22:10:58.056628 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.056646 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.056661 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.057834 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 5 22:10:58.057855 kernel: signal: max sigframe size: 3632 Aug 5 22:10:58.057866 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:10:58.057877 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:10:58.057888 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:10:58.057899 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:10:58.057908 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:10:58.057921 kernel: .... node #0, CPUs: #1 Aug 5 22:10:58.057932 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 5 22:10:58.057941 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 5 22:10:58.057952 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:10:58.057960 kernel: smpboot: Max logical packages: 1 Aug 5 22:10:58.057971 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Aug 5 22:10:58.057979 kernel: devtmpfs: initialized Aug 5 22:10:58.057987 kernel: x86/mm: Memory block size: 128MB Aug 5 22:10:58.058000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 5 22:10:58.058008 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:10:58.058018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:10:58.058027 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:10:58.058036 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:10:58.058046 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:10:58.058054 kernel: audit: type=2000 audit(1722895856.027:1): state=initialized audit_enabled=0 res=1 Aug 5 22:10:58.058064 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:10:58.058072 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:10:58.058085 kernel: cpuidle: using governor menu Aug 5 22:10:58.058093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:10:58.058101 kernel: dca service started, version 1.12.1 Aug 5 22:10:58.058109 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Aug 5 22:10:58.058119 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:10:58.058128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:10:58.058139 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:10:58.058147 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:10:58.058157 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:10:58.058167 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:10:58.058178 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:10:58.058186 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:10:58.058196 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:10:58.058205 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:10:58.058214 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:10:58.058224 kernel: ACPI: Interpreter enabled Aug 5 22:10:58.058235 kernel: ACPI: PM: (supports S0 S5) Aug 5 22:10:58.058245 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:10:58.058257 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:10:58.058267 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:10:58.058278 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 5 22:10:58.058288 kernel: iommu: Default domain type: Translated Aug 5 22:10:58.058296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:10:58.058307 kernel: efivars: Registered efivars operations Aug 5 22:10:58.058315 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:10:58.058326 kernel: PCI: System does not support PCI Aug 5 22:10:58.058334 kernel: vgaarb: loaded Aug 5 22:10:58.058346 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 5 22:10:58.058354 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:10:58.058365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:10:58.058373 kernel: pnp: PnP ACPI init Aug 5 22:10:58.058384 kernel: 
pnp: PnP ACPI: found 3 devices Aug 5 22:10:58.058392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:10:58.058403 kernel: NET: Registered PF_INET protocol family Aug 5 22:10:58.058411 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:10:58.058420 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:10:58.058432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:10:58.058440 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:10:58.058451 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:10:58.058459 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:10:58.058470 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.058478 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.058489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:10:58.058497 kernel: NET: Registered PF_XDP protocol family Aug 5 22:10:58.058508 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:10:58.058518 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:10:58.058529 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Aug 5 22:10:58.058537 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:10:58.058548 kernel: Initialise system trusted keyrings Aug 5 22:10:58.058556 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:10:58.058566 kernel: Key type asymmetric registered Aug 5 22:10:58.058574 kernel: Asymmetric key parser 'x509' registered Aug 5 22:10:58.058584 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:10:58.058593 kernel: io scheduler mq-deadline registered Aug 5 22:10:58.058606 kernel: io scheduler kyber registered Aug 5 22:10:58.058614 kernel: io scheduler bfq registered Aug 5 22:10:58.058625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:10:58.058636 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:10:58.058646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:10:58.058658 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:10:58.058666 kernel: i8042: PNP: No PS/2 controller found. 
Aug 5 22:10:58.058821 kernel: rtc_cmos 00:02: registered as rtc0 Aug 5 22:10:58.058918 kernel: rtc_cmos 00:02: setting system clock to 2024-08-05T22:10:57 UTC (1722895857) Aug 5 22:10:58.059006 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 5 22:10:58.059019 kernel: intel_pstate: CPU model not supported Aug 5 22:10:58.059030 kernel: efifb: probing for efifb Aug 5 22:10:58.059039 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 22:10:58.059049 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 22:10:58.059057 kernel: efifb: scrolling: redraw Aug 5 22:10:58.059067 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 22:10:58.059080 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:10:58.059089 kernel: fb0: EFI VGA frame buffer device Aug 5 22:10:58.059099 kernel: pstore: Using crash dump compression: deflate Aug 5 22:10:58.059107 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:10:58.059118 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:10:58.059126 kernel: Segment Routing with IPv6 Aug 5 22:10:58.059136 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:10:58.059144 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:10:58.059155 kernel: Key type dns_resolver registered Aug 5 22:10:58.059163 kernel: IPI shorthand broadcast: enabled Aug 5 22:10:58.059176 kernel: sched_clock: Marking stable (786008400, 41631400)->(1012191400, -184551600) Aug 5 22:10:58.059184 kernel: registered taskstats version 1 Aug 5 22:10:58.059195 kernel: Loading compiled-in X.509 certificates Aug 5 22:10:58.059203 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55' Aug 5 22:10:58.059214 kernel: Key type .fscrypt registered Aug 5 22:10:58.059222 kernel: Key type fscrypt-provisioning registered Aug 5 22:10:58.059232 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:10:58.059241 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:10:58.059254 kernel: ima: No architecture policies found Aug 5 22:10:58.059262 kernel: clk: Disabling unused clocks Aug 5 22:10:58.059272 kernel: Freeing unused kernel image (initmem) memory: 49328K Aug 5 22:10:58.059281 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:10:58.059289 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:10:58.059300 kernel: Run /init as init process Aug 5 22:10:58.059310 kernel: with arguments: Aug 5 22:10:58.059321 kernel: /init Aug 5 22:10:58.059331 kernel: with environment: Aug 5 22:10:58.059347 kernel: HOME=/ Aug 5 22:10:58.059360 kernel: TERM=linux Aug 5 22:10:58.059374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:10:58.059391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:10:58.059408 systemd[1]: Detected virtualization microsoft. Aug 5 22:10:58.059423 systemd[1]: Detected architecture x86-64. Aug 5 22:10:58.059438 systemd[1]: Running in initrd. Aug 5 22:10:58.059452 systemd[1]: No hostname configured, using default hostname. Aug 5 22:10:58.059469 systemd[1]: Hostname set to <localhost>. Aug 5 22:10:58.059483 systemd[1]: Initializing machine ID from random generator. 
Aug 5 22:10:58.059497 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:10:58.059511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:10:58.059526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:10:58.059542 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:10:58.059557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:10:58.059572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:10:58.059590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:10:58.059607 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:10:58.059621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:10:58.059638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:10:58.059654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:10:58.059669 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:10:58.063154 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:10:58.063180 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:10:58.063196 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:10:58.063211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:10:58.063226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:10:58.063242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:10:58.063257 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:10:58.063272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:10:58.063287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:10:58.063305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:10:58.063320 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:10:58.063335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:10:58.063351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:10:58.063366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:10:58.063381 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:10:58.063396 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:10:58.063411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:10:58.063426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:58.063470 systemd-journald[176]: Collecting audit messages is disabled. Aug 5 22:10:58.063502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:10:58.063518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:10:58.063533 systemd-journald[176]: Journal started Aug 5 22:10:58.063570 systemd-journald[176]: Runtime Journal (/run/log/journal/52c48208ceeb4115a89b3037c3ecbe67) is 8.0M, max 158.8M, 150.8M free. 
Aug 5 22:10:58.068208 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:10:58.068806 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:10:58.077319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:10:58.085832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:10:58.085907 systemd-modules-load[177]: Inserted module 'overlay' Aug 5 22:10:58.093487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:58.116812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:10:58.129069 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:10:58.132052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:58.140031 kernel: Bridge firewalling registered Aug 5 22:10:58.133802 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 5 22:10:58.145792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:10:58.150446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:10:58.150635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:10:58.154808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:10:58.175275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:10:58.183906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:10:58.184845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:10:58.191769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:58.198839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:10:58.223940 dracut-cmdline[214]: dracut-dracut-053 Aug 5 22:10:58.228055 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.246011 systemd-resolved[205]: Positive Trust Anchors: Aug 5 22:10:58.246028 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:10:58.246081 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:10:58.252798 systemd-resolved[205]: Defaulting to hostname 'linux'. 
Aug 5 22:10:58.253709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:10:58.267933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:10:58.323697 kernel: SCSI subsystem initialized Aug 5 22:10:58.334695 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:10:58.347699 kernel: iscsi: registered transport (tcp) Aug 5 22:10:58.373015 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:10:58.373084 kernel: QLogic iSCSI HBA Driver Aug 5 22:10:58.407946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:10:58.420007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:10:58.452096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:10:58.452168 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:10:58.455519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:10:58.499705 kernel: raid6: avx512x4 gen() 18306 MB/s Aug 5 22:10:58.518694 kernel: raid6: avx512x2 gen() 18448 MB/s Aug 5 22:10:58.536695 kernel: raid6: avx512x1 gen() 18509 MB/s Aug 5 22:10:58.557695 kernel: raid6: avx2x4 gen() 18262 MB/s Aug 5 22:10:58.576687 kernel: raid6: avx2x2 gen() 18503 MB/s Aug 5 22:10:58.596517 kernel: raid6: avx2x1 gen() 13803 MB/s Aug 5 22:10:58.596564 kernel: raid6: using algorithm avx512x1 gen() 18509 MB/s Aug 5 22:10:58.617942 kernel: raid6: .... xor() 27109 MB/s, rmw enabled Aug 5 22:10:58.617974 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:10:58.643699 kernel: xor: automatically using best checksumming function avx Aug 5 22:10:58.808708 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:10:58.818455 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:10:58.825852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:10:58.850798 systemd-udevd[396]: Using default interface naming scheme 'v255'. Aug 5 22:10:58.855144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:10:58.867897 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:10:58.881215 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 5 22:10:58.907290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:10:58.939099 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:10:58.981169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:10:58.994872 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:10:59.026455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:10:59.035172 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:10:59.041247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:10:59.046641 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:10:59.057887 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:10:59.073210 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:10:59.077126 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:10:59.092048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Aug 5 22:10:59.092185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:59.097500 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:59.103002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:10:59.103312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.105851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.123840 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:10:59.123863 kernel: AES CTR mode by8 optimization enabled Aug 5 22:10:59.127343 kernel: hv_vmbus: Vmbus version:5.2 Aug 5 22:10:59.128652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.142675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:10:59.142839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.173825 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 22:10:59.173859 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 22:10:59.169886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.196799 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 22:10:59.196839 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it> Aug 5 22:10:59.197741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.216737 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 22:10:59.215824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:59.228261 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 22:10:59.243957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:59.255689 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 22:10:59.259862 kernel: PTP clock support registered Aug 5 22:10:59.259898 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 22:10:59.275419 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 22:10:59.275488 kernel: hv_vmbus: registering driver hv_utils Aug 5 22:10:59.279971 kernel: scsi host1: storvsc_host_t Aug 5 22:10:59.280154 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 22:10:59.280167 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 22:10:59.284475 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 22:10:59.284650 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 22:10:59.289531 kernel: scsi host0: storvsc_host_t Aug 5 22:10:59.291569 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 22:10:59.130668 systemd-resolved[205]: Clock change detected. Flushing caches. Aug 5 22:10:59.143427 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 22:10:59.143635 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 22:10:59.144841 systemd-journald[176]: Time jumped backwards, rotating. 
Aug 5 22:10:59.162524 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 22:10:59.163707 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 22:10:59.163736 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 22:10:59.177599 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 22:10:59.190922 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 22:10:59.191091 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 22:10:59.191237 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 22:10:59.191393 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 22:10:59.191570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:10:59.191586 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 22:10:59.207817 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: VF slot 1 added Aug 5 22:10:59.218800 kernel: hv_vmbus: registering driver hv_pci Aug 5 22:10:59.222821 kernel: hv_pci e7b1e3f9-cce6-41f0-9d91-181a10196386: PCI VMBus probing: Using version 0x10004 Aug 5 22:10:59.261522 kernel: hv_pci e7b1e3f9-cce6-41f0-9d91-181a10196386: PCI host bridge to bus cce6:00 Aug 5 22:10:59.261981 kernel: pci_bus cce6:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 5 22:10:59.262173 kernel: pci_bus cce6:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 22:10:59.262333 kernel: pci cce6:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 5 22:10:59.262537 kernel: pci cce6:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:10:59.262714 kernel: pci cce6:00:02.0: enabling Extended Tags Aug 5 22:10:59.262908 kernel: pci cce6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cce6:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 5 22:10:59.263077 kernel: pci_bus cce6:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 22:10:59.263459 kernel: pci cce6:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:10:59.443802 kernel: mlx5_core cce6:00:02.0: enabling device (0000 -> 0002) Aug 5 22:10:59.684983 kernel: mlx5_core cce6:00:02.0: firmware version: 14.30.1284 Aug 5 22:10:59.685194 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Aug 5 22:10:59.685213 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Aug 5 22:10:59.685229 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: VF registering: eth1 Aug 5 22:10:59.685383 kernel: mlx5_core cce6:00:02.0 eth1: joined to eth0 Aug 5 22:10:59.685550 kernel: mlx5_core cce6:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 5 22:10:59.593171 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 22:10:59.672466 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 5 22:10:59.695822 kernel: mlx5_core cce6:00:02.0 enP52454s1: renamed from eth1 Aug 5 22:10:59.696133 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:10:59.709405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 22:10:59.712542 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 22:10:59.730918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Aug 5 22:10:59.741826 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:10:59.748815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:11:00.756614 disk-uuid[605]: The operation has completed successfully. Aug 5 22:11:00.761901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:11:00.835352 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:11:00.835520 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:11:00.860917 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:11:00.866570 sh[691]: Success Aug 5 22:11:00.894818 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:11:01.054079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:11:01.069926 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:11:01.074139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:11:01.089795 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12 Aug 5 22:11:01.089832 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:01.094626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:11:01.097244 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:11:01.099446 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:11:01.302203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:11:01.305435 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:11:01.317257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:11:01.322977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:11:01.341971 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:01.342017 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:01.342036 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:01.372807 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:01.386729 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:11:01.388829 kernel: BTRFS info (device sda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:01.396863 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:11:01.407947 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:11:01.423927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:11:01.433979 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:11:01.455274 systemd-networkd[875]: lo: Link UP Aug 5 22:11:01.455283 systemd-networkd[875]: lo: Gained carrier Aug 5 22:11:01.460038 systemd-networkd[875]: Enumeration completed Aug 5 22:11:01.461217 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:11:01.461736 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:01.461740 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 5 22:11:01.466864 systemd[1]: Reached target network.target - Network. Aug 5 22:11:01.521806 kernel: mlx5_core cce6:00:02.0 enP52454s1: Link up Aug 5 22:11:01.559808 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: Data path switched to VF: enP52454s1 Aug 5 22:11:01.560498 systemd-networkd[875]: enP52454s1: Link UP Aug 5 22:11:01.560633 systemd-networkd[875]: eth0: Link UP Aug 5 22:11:01.560818 systemd-networkd[875]: eth0: Gained carrier Aug 5 22:11:01.560832 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:01.565768 systemd-networkd[875]: enP52454s1: Gained carrier Aug 5 22:11:01.590830 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 5 22:11:02.083772 ignition[856]: Ignition 2.18.0 Aug 5 22:11:02.083803 ignition[856]: Stage: fetch-offline Aug 5 22:11:02.085394 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:11:02.083865 ignition[856]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.083877 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.084057 ignition[856]: parsed url from cmdline: "" Aug 5 22:11:02.084062 ignition[856]: no config URL provided Aug 5 22:11:02.098896 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 22:11:02.084070 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:11:02.084081 ignition[856]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:11:02.084087 ignition[856]: failed to fetch config: resource requires networking Aug 5 22:11:02.084314 ignition[856]: Ignition finished successfully Aug 5 22:11:02.111222 ignition[887]: Ignition 2.18.0 Aug 5 22:11:02.111229 ignition[887]: Stage: fetch Aug 5 22:11:02.111397 ignition[887]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.111408 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.111512 ignition[887]: parsed url from cmdline: "" Aug 5 22:11:02.111515 ignition[887]: no config URL provided Aug 5 22:11:02.111520 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:11:02.111529 ignition[887]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:11:02.111558 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 22:11:02.196861 ignition[887]: GET result: OK Aug 5 22:11:02.197064 ignition[887]: config has been read from IMDS userdata Aug 5 22:11:02.197105 ignition[887]: parsing config with SHA512: 3ffebe6164877bbb8f014036ac2d78838f5aa167dff435e6b1319e7a25358c54a61fdd0aa40584f23bacdf4eb7eb8bcac045992154d421bce4d8b50e902d5287 Aug 5 22:11:02.202608 unknown[887]: fetched base config from "system" Aug 5 22:11:02.202629 unknown[887]: fetched base config from "system" Aug 5 22:11:02.203829 ignition[887]: fetch: fetch complete Aug 5 22:11:02.202638 unknown[887]: fetched user config from "azure" Aug 5 22:11:02.203839 ignition[887]: fetch: fetch passed Aug 5 22:11:02.205919 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:11:02.203909 ignition[887]: Ignition finished successfully Aug 5 22:11:02.219009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Aug 5 22:11:02.234752 ignition[895]: Ignition 2.18.0 Aug 5 22:11:02.234763 ignition[895]: Stage: kargs Aug 5 22:11:02.235016 ignition[895]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.238001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:11:02.235029 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.235933 ignition[895]: kargs: kargs passed Aug 5 22:11:02.235988 ignition[895]: Ignition finished successfully Aug 5 22:11:02.248291 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:11:02.266405 ignition[902]: Ignition 2.18.0 Aug 5 22:11:02.266415 ignition[902]: Stage: disks Aug 5 22:11:02.268298 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:11:02.266614 ignition[902]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.272137 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:11:02.266629 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.276445 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:11:02.267491 ignition[902]: disks: disks passed Aug 5 22:11:02.279181 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:11:02.267530 ignition[902]: Ignition finished successfully Aug 5 22:11:02.283204 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:11:02.285465 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:11:02.300667 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:11:02.342202 systemd-fsck[911]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 22:11:02.347336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:11:02.356930 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:11:02.461802 kernel: EXT4-fs (sda9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none. Aug 5 22:11:02.461861 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:11:02.466103 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:11:02.497982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:11:02.506077 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:11:02.517034 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (922) Aug 5 22:11:02.522816 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:02.522867 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:02.524736 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:02.530670 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:02.529154 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 22:11:02.534204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:11:02.534245 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:11:02.538699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:11:02.544045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:11:02.553551 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 5 22:11:02.720008 systemd-networkd[875]: enP52454s1: Gained IPv6LL Aug 5 22:11:02.960454 coreos-metadata[924]: Aug 05 22:11:02.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:11:02.970879 coreos-metadata[924]: Aug 05 22:11:02.970 INFO Fetch successful Aug 5 22:11:02.973209 coreos-metadata[924]: Aug 05 22:11:02.970 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:11:02.984013 coreos-metadata[924]: Aug 05 22:11:02.983 INFO Fetch successful Aug 5 22:11:02.995846 coreos-metadata[924]: Aug 05 22:11:02.995 INFO wrote hostname ci-3975.2.0-a-9e76a2f9cc to /sysroot/etc/hostname Aug 5 22:11:02.997518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:11:03.080220 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:11:03.108939 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:11:03.114903 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:11:03.119609 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:11:03.615948 systemd-networkd[875]: eth0: Gained IPv6LL Aug 5 22:11:03.626813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:11:03.634882 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:11:03.640956 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:11:03.648025 kernel: BTRFS info (device sda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:03.649433 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:11:03.682938 ignition[1041]: INFO : Ignition 2.18.0 Aug 5 22:11:03.682938 ignition[1041]: INFO : Stage: mount Aug 5 22:11:03.682938 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:03.682938 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:03.682938 ignition[1041]: INFO : mount: mount passed Aug 5 22:11:03.682938 ignition[1041]: INFO : Ignition finished successfully Aug 5 22:11:03.683807 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:11:03.687472 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:11:03.703372 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:11:03.710577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:11:03.727733 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1053) Aug 5 22:11:03.727792 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:03.730751 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:03.732927 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:03.737812 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:03.738947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 22:11:03.761166 ignition[1070]: INFO : Ignition 2.18.0 Aug 5 22:11:03.761166 ignition[1070]: INFO : Stage: files Aug 5 22:11:03.765605 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:03.765605 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:03.765605 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:11:03.765605 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:11:03.765605 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:11:03.824923 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:11:03.829020 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:11:03.829020 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:11:03.825479 unknown[1070]: wrote ssh authorized keys file for user: core Aug 5 22:11:03.858700 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:11:03.863236 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:11:04.540606 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:11:04.680545 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:11:04.697546 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:11:04.697546 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:11:04.705418 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:11:04.705418 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:11:04.713404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:11:04.717558 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:11:04.721572 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:11:04.725536 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.731253 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.736774 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.736774 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 5 22:11:05.321750 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:11:06.318754 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:06.318754 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:11:06.345605 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:11:06.350406 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:11:06.350406 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:11:06.357258 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:11:06.357258 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:11:06.363659 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:11:06.367608 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:11:06.371454 ignition[1070]: INFO : files: files passed Aug 5 22:11:06.373194 ignition[1070]: INFO : Ignition finished successfully Aug 5 22:11:06.374003 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:11:06.383935 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:11:06.389844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:11:06.395904 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:11:06.396543 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:11:06.413406 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.413406 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.420715 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.425001 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:11:06.430745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:11:06.437917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:11:06.460561 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:11:06.460665 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:11:06.466635 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 5 22:11:06.472232 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:11:06.479192 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:11:06.489334 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:11:06.500873 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:11:06.508927 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:11:06.518555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:11:06.519621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:11:06.520435 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:11:06.520772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:11:06.520913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:11:06.521594 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:11:06.522025 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:11:06.522375 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:11:06.522762 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:11:06.523765 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:11:06.524167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:11:06.524545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:11:06.524954 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:11:06.525339 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:11:06.525720 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:11:06.526077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:11:06.526205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:11:06.526957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:11:06.527372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:11:06.527721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:11:06.561716 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:11:06.565413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:11:06.565523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:11:06.575931 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:11:06.582348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:11:06.622319 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:11:06.622496 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:11:06.626681 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 22:11:06.626837 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:11:06.641960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:11:06.654985 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:11:06.657150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 5 22:11:06.657314 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:11:06.662012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:11:06.662153 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:11:06.678928 ignition[1123]: INFO : Ignition 2.18.0 Aug 5 22:11:06.678928 ignition[1123]: INFO : Stage: umount Aug 5 22:11:06.678928 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:06.678928 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:06.677309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:11:06.691432 ignition[1123]: INFO : umount: umount passed Aug 5 22:11:06.691432 ignition[1123]: INFO : Ignition finished successfully Aug 5 22:11:06.677429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:11:06.691548 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:11:06.691639 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:11:06.697548 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:11:06.697607 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:11:06.702331 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:11:06.702380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:11:06.706630 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:11:06.706679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:11:06.709700 systemd[1]: Stopped target network.target - Network. Aug 5 22:11:06.710408 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:11:06.710453 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:11:06.710820 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:11:06.711144 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:11:06.736491 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:11:06.739234 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:11:06.741122 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:11:06.743149 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:11:06.743194 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:11:06.744220 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:11:06.744256 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:11:06.744499 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:11:06.744539 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:11:06.745246 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:11:06.745275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:11:06.745844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:11:06.746141 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:11:06.747601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:11:06.748105 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:11:06.748194 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:11:06.748922 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Aug 5 22:11:06.749026 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:11:06.770729 systemd-networkd[875]: eth0: DHCPv6 lease lost Aug 5 22:11:06.774246 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:11:06.774420 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:11:06.780134 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:11:06.780208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:11:06.809691 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:11:06.809875 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:11:06.814699 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:11:06.814977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:11:06.828639 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:11:06.830754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:11:06.833000 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:11:06.837845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:11:06.840212 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:11:06.844751 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:11:06.844813 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:11:06.853638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:11:06.872351 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:11:06.872518 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:11:06.877285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:11:06.877326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:11:06.898430 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:11:06.898470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:11:06.905236 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:11:06.907433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:11:06.914431 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:11:06.914473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:11:06.922371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:11:06.929860 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: Data path switched from VF: enP52454s1 Aug 5 22:11:06.922432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:11:06.937943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:11:06.941829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:11:06.941882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:11:06.945409 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 5 22:11:06.945453 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:11:06.950532 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Aug 5 22:11:06.950583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:11:06.956100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:11:06.956152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:11:06.964693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:11:06.964810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:11:06.972006 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:11:06.972099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:11:06.977241 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:11:06.991330 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:11:07.012626 systemd[1]: Switching root. Aug 5 22:11:07.086280 systemd-journald[176]: Journal stopped Aug 5 22:10:58.053953 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:27 -00 2024 Aug 5 22:10:58.053989 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.054004 kernel: BIOS-provided physical RAM map: Aug 5 22:10:58.054015 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 5 22:10:58.054026 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 5 22:10:58.054037 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 5 22:10:58.054051 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Aug 5 22:10:58.054065 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Aug 5 22:10:58.054076 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 5 22:10:58.054088 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 5 22:10:58.054099 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 5 22:10:58.054110 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 5 22:10:58.054121 kernel: printk: bootconsole [earlyser0] enabled Aug 5 22:10:58.054133 kernel: NX (Execute Disable) protection: active Aug 5 22:10:58.054150 kernel: APIC: Static calls initialized Aug 5 22:10:58.054163 kernel: efi: EFI v2.7 by Microsoft Aug 5 22:10:58.054176 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3ee83a98 Aug 5 22:10:58.054189 kernel: SMBIOS 3.1.0 present. 
Aug 5 22:10:58.054201 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 5 22:10:58.054214 kernel: Hypervisor detected: Microsoft Hyper-V Aug 5 22:10:58.054227 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 5 22:10:58.054239 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Aug 5 22:10:58.054252 kernel: Hyper-V: Nested features: 0x1e0101 Aug 5 22:10:58.054264 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 5 22:10:58.054279 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 5 22:10:58.054292 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:10:58.054305 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:10:58.054318 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 5 22:10:58.054332 kernel: tsc: Detected 2593.906 MHz processor Aug 5 22:10:58.054345 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:10:58.054358 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:10:58.054371 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 5 22:10:58.054384 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 5 22:10:58.054400 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:10:58.054413 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 5 22:10:58.054425 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 5 22:10:58.054438 kernel: Using GB pages for direct mapping Aug 5 22:10:58.054450 kernel: Secure boot disabled Aug 5 22:10:58.054463 kernel: ACPI: Early table checksum verification disabled Aug 5 22:10:58.054476 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 5 22:10:58.054495 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054510 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054524 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 22:10:58.054537 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 5 22:10:58.054551 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054565 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054579 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054596 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054610 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054623 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054637 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:10:58.054651 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 5 22:10:58.054664 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 5 22:10:58.054711 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 5 22:10:58.054724 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 5 22:10:58.054741 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Aug 5 22:10:58.054755 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 5 22:10:58.054768 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Aug 5 22:10:58.054782 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 5 22:10:58.054795 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 5 22:10:58.054808 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 5 22:10:58.054820 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:10:58.054831 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:10:58.054844 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 5 22:10:58.054860 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 5 22:10:58.054872 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 5 22:10:58.054885 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 5 22:10:58.054899 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 5 22:10:58.054912 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 5 22:10:58.054926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 5 22:10:58.054939 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 5 22:10:58.054952 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 5 22:10:58.054965 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 5 22:10:58.054981 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 5 22:10:58.054995 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 5 22:10:58.055008 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 5 22:10:58.055022 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 5 22:10:58.055034 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 5 22:10:58.055046 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 5 22:10:58.055058 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Aug 5 22:10:58.055071 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 5 22:10:58.055085 kernel: Zone ranges: Aug 5 22:10:58.055102 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:10:58.055114 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:10:58.055127 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:10:58.055141 kernel: Movable zone start for each node Aug 5 22:10:58.055155 kernel: Early memory node ranges Aug 5 22:10:58.055168 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 5 22:10:58.055182 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 5 22:10:58.055196 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 5 22:10:58.055210 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:10:58.055226 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 5 22:10:58.055239 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:10:58.055254 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 5 22:10:58.055267 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 5 22:10:58.055281 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 5 
22:10:58.055295 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 5 22:10:58.055309 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:10:58.055322 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:10:58.055336 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:10:58.055352 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 5 22:10:58.055366 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:10:58.055380 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 5 22:10:58.055394 kernel: Booting paravirtualized kernel on Hyper-V Aug 5 22:10:58.055408 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:10:58.055421 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:10:58.055435 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:10:58.055449 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:10:58.055462 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:10:58.055478 kernel: Hyper-V: PV spinlocks enabled Aug 5 22:10:58.055492 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:10:58.055507 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.055522 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:10:58.055536 kernel: random: crng init done Aug 5 22:10:58.055549 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:10:58.055563 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:10:58.055576 kernel: Fallback order for Node 0: 0 Aug 5 22:10:58.055593 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 5 22:10:58.055616 kernel: Policy zone: Normal Aug 5 22:10:58.055631 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:10:58.055647 kernel: software IO TLB: area num 2. Aug 5 22:10:58.055662 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49328K init, 2016K bss, 316268K reserved, 0K cma-reserved) Aug 5 22:10:58.055717 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:10:58.055732 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:10:58.055747 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:10:58.055761 kernel: Dynamic Preempt: voluntary Aug 5 22:10:58.055776 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:10:58.055792 kernel: rcu: RCU event tracing is enabled. Aug 5 22:10:58.055810 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:10:58.055824 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:10:58.055839 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:10:58.055854 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:10:58.055869 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 22:10:58.055886 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:10:58.055901 kernel: Using NULL legacy PIC Aug 5 22:10:58.055915 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 5 22:10:58.055930 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:10:58.055945 kernel: Console: colour dummy device 80x25 Aug 5 22:10:58.055959 kernel: printk: console [tty1] enabled Aug 5 22:10:58.055974 kernel: printk: console [ttyS0] enabled Aug 5 22:10:58.055989 kernel: printk: bootconsole [earlyser0] disabled Aug 5 22:10:58.056003 kernel: ACPI: Core revision 20230628 Aug 5 22:10:58.056018 kernel: Failed to register legacy timer interrupt Aug 5 22:10:58.056035 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:10:58.056050 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 22:10:58.056065 kernel: Hyper-V: Using IPI hypercalls Aug 5 22:10:58.056079 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 5 22:10:58.056094 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 5 22:10:58.056109 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 5 22:10:58.056124 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 5 22:10:58.056139 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 5 22:10:58.056153 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 5 22:10:58.056171 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Aug 5 22:10:58.056186 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 5 22:10:58.056200 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Aug 5 22:10:58.056215 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:10:58.056230 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:10:58.056244 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:10:58.056258 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:10:58.056273 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 5 22:10:58.056287 kernel: RETBleed: Vulnerable Aug 5 22:10:58.056304 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:10:58.056319 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:10:58.056333 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:10:58.056351 kernel: GDS: Unknown: Dependent on hypervisor status Aug 5 22:10:58.056365 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:10:58.056380 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:10:58.056394 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:10:58.056409 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 5 22:10:58.056423 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 5 22:10:58.056438 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 5 22:10:58.056453 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:10:58.056470 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 5 22:10:58.056484 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 5 22:10:58.056499 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 5 22:10:58.056513 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 5 22:10:58.056527 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:10:58.056541 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:10:58.056555 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:10:58.056570 kernel: SELinux: Initializing. Aug 5 22:10:58.056584 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.056599 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.056614 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 5 22:10:58.056628 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.056646 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.056661 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:10:58.057834 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 5 22:10:58.057855 kernel: signal: max sigframe size: 3632 Aug 5 22:10:58.057866 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:10:58.057877 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:10:58.057888 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:10:58.057899 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:10:58.057908 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:10:58.057921 kernel: .... node #0, CPUs: #1 Aug 5 22:10:58.057932 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 5 22:10:58.057941 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
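The mitigation and vulnerability states reported above (Spectre V1/V2, RETBleed, TAA, MMIO Stale Data, GDS) can be read back after boot from sysfs; a minimal sketch:

```python
# Illustrative: dump the per-vulnerability status strings the kernel exposes.
from pathlib import Path

vuln_dir = Path("/sys/devices/system/cpu/vulnerabilities")
for entry in sorted(vuln_dir.iterdir()):
    print(f"{entry.name}: {entry.read_text().strip()}")
```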
Aug 5 22:10:58.057952 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:10:58.057960 kernel: smpboot: Max logical packages: 1 Aug 5 22:10:58.057971 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Aug 5 22:10:58.057979 kernel: devtmpfs: initialized Aug 5 22:10:58.057987 kernel: x86/mm: Memory block size: 128MB Aug 5 22:10:58.058000 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 5 22:10:58.058008 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:10:58.058018 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:10:58.058027 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:10:58.058036 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:10:58.058046 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:10:58.058054 kernel: audit: type=2000 audit(1722895856.027:1): state=initialized audit_enabled=0 res=1 Aug 5 22:10:58.058064 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:10:58.058072 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:10:58.058085 kernel: cpuidle: using governor menu Aug 5 22:10:58.058093 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:10:58.058101 kernel: dca service started, version 1.12.1 Aug 5 22:10:58.058109 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Aug 5 22:10:58.058119 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:10:58.058128 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:10:58.058139 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:10:58.058147 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:10:58.058157 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:10:58.058167 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:10:58.058178 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:10:58.058186 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:10:58.058196 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:10:58.058205 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:10:58.058214 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:10:58.058224 kernel: ACPI: Interpreter enabled Aug 5 22:10:58.058235 kernel: ACPI: PM: (supports S0 S5) Aug 5 22:10:58.058245 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:10:58.058257 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:10:58.058267 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:10:58.058278 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 5 22:10:58.058288 kernel: iommu: Default domain type: Translated Aug 5 22:10:58.058296 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:10:58.058307 kernel: efivars: Registered efivars operations Aug 5 22:10:58.058315 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:10:58.058326 kernel: PCI: System does not support PCI Aug 5 22:10:58.058334 kernel: vgaarb: loaded Aug 5 22:10:58.058346 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 5 22:10:58.058354 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:10:58.058365 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:10:58.058373 kernel: pnp: PnP ACPI init Aug 5 22:10:58.058384 kernel: 
pnp: PnP ACPI: found 3 devices Aug 5 22:10:58.058392 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:10:58.058403 kernel: NET: Registered PF_INET protocol family Aug 5 22:10:58.058411 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:10:58.058420 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:10:58.058432 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:10:58.058440 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:10:58.058451 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:10:58.058459 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:10:58.058470 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.058478 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:10:58.058489 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:10:58.058497 kernel: NET: Registered PF_XDP protocol family Aug 5 22:10:58.058508 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:10:58.058518 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:10:58.058529 kernel: software IO TLB: mapped [mem 0x000000003ae83000-0x000000003ee83000] (64MB) Aug 5 22:10:58.058537 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:10:58.058548 kernel: Initialise system trusted keyrings Aug 5 22:10:58.058556 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:10:58.058566 kernel: Key type asymmetric registered Aug 5 22:10:58.058574 kernel: Asymmetric key parser 'x509' registered Aug 5 22:10:58.058584 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:10:58.058593 kernel: io scheduler mq-deadline registered Aug 5 22:10:58.058606 kernel: io scheduler kyber registered Aug 5 22:10:58.058614 kernel: io scheduler bfq registered Aug 5 22:10:58.058625 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:10:58.058636 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:10:58.058646 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:10:58.058658 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:10:58.058666 kernel: i8042: PNP: No PS/2 controller found. 
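The I/O schedulers registered above (mq-deadline, kyber, bfq) are selectable per block device through sysfs. A minimal sketch, using the sda device name that appears later in this log; writing the sysfs file requires root:

```python
# Illustrative: show and optionally switch the elevator for sda.
from pathlib import Path

sched = Path("/sys/block/sda/queue/scheduler")
print(sched.read_text().strip())  # e.g. "[mq-deadline] kyber bfq none"

# Uncomment to switch schedulers (assumes root and that bfq is built in, as above):
# sched.write_text("bfq")
```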
Aug 5 22:10:58.058821 kernel: rtc_cmos 00:02: registered as rtc0 Aug 5 22:10:58.058918 kernel: rtc_cmos 00:02: setting system clock to 2024-08-05T22:10:57 UTC (1722895857) Aug 5 22:10:58.059006 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 5 22:10:58.059019 kernel: intel_pstate: CPU model not supported Aug 5 22:10:58.059030 kernel: efifb: probing for efifb Aug 5 22:10:58.059039 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 22:10:58.059049 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 22:10:58.059057 kernel: efifb: scrolling: redraw Aug 5 22:10:58.059067 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 22:10:58.059080 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:10:58.059089 kernel: fb0: EFI VGA frame buffer device Aug 5 22:10:58.059099 kernel: pstore: Using crash dump compression: deflate Aug 5 22:10:58.059107 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:10:58.059118 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:10:58.059126 kernel: Segment Routing with IPv6 Aug 5 22:10:58.059136 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:10:58.059144 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:10:58.059155 kernel: Key type dns_resolver registered Aug 5 22:10:58.059163 kernel: IPI shorthand broadcast: enabled Aug 5 22:10:58.059176 kernel: sched_clock: Marking stable (786008400, 41631400)->(1012191400, -184551600) Aug 5 22:10:58.059184 kernel: registered taskstats version 1 Aug 5 22:10:58.059195 kernel: Loading compiled-in X.509 certificates Aug 5 22:10:58.059203 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: e31e857530e65c19b206dbf3ab8297cc37ac5d55' Aug 5 22:10:58.059214 kernel: Key type .fscrypt registered Aug 5 22:10:58.059222 kernel: Key type fscrypt-provisioning registered Aug 5 22:10:58.059232 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:10:58.059241 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:10:58.059254 kernel: ima: No architecture policies found Aug 5 22:10:58.059262 kernel: clk: Disabling unused clocks Aug 5 22:10:58.059272 kernel: Freeing unused kernel image (initmem) memory: 49328K Aug 5 22:10:58.059281 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:10:58.059289 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:10:58.059300 kernel: Run /init as init process Aug 5 22:10:58.059310 kernel: with arguments: Aug 5 22:10:58.059321 kernel: /init Aug 5 22:10:58.059331 kernel: with environment: Aug 5 22:10:58.059347 kernel: HOME=/ Aug 5 22:10:58.059360 kernel: TERM=linux Aug 5 22:10:58.059374 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:10:58.059391 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:10:58.059408 systemd[1]: Detected virtualization microsoft. Aug 5 22:10:58.059423 systemd[1]: Detected architecture x86-64. Aug 5 22:10:58.059438 systemd[1]: Running in initrd. Aug 5 22:10:58.059452 systemd[1]: No hostname configured, using default hostname. Aug 5 22:10:58.059469 systemd[1]: Hostname set to . Aug 5 22:10:58.059483 systemd[1]: Initializing machine ID from random generator. 
Aug 5 22:10:58.059497 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:10:58.059511 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:10:58.059526 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:10:58.059542 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:10:58.059557 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:10:58.059572 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:10:58.059590 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:10:58.059607 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:10:58.059621 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:10:58.059638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:10:58.059654 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:10:58.059669 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:10:58.063154 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:10:58.063180 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:10:58.063196 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:10:58.063211 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:10:58.063226 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:10:58.063242 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:10:58.063257 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:10:58.063272 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:10:58.063287 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:10:58.063305 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:10:58.063320 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:10:58.063335 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:10:58.063351 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:10:58.063366 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:10:58.063381 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:10:58.063396 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:10:58.063411 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:10:58.063426 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:58.063470 systemd-journald[176]: Collecting audit messages is disabled. Aug 5 22:10:58.063502 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:10:58.063518 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:10:58.063533 systemd-journald[176]: Journal started Aug 5 22:10:58.063570 systemd-journald[176]: Runtime Journal (/run/log/journal/52c48208ceeb4115a89b3037c3ecbe67) is 8.0M, max 158.8M, 150.8M free. 
Aug 5 22:10:58.068208 systemd[1]: Finished systemd-fsck-usr.service. Aug 5 22:10:58.068806 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:10:58.077319 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:10:58.085832 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:10:58.085907 systemd-modules-load[177]: Inserted module 'overlay' Aug 5 22:10:58.093487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:58.116812 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:10:58.129069 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:10:58.132052 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:58.140031 kernel: Bridge firewalling registered Aug 5 22:10:58.133802 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 5 22:10:58.145792 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:10:58.150446 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:10:58.150635 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:10:58.154808 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:10:58.175275 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:10:58.183906 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:10:58.184845 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:10:58.191769 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:58.198839 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:10:58.223940 dracut-cmdline[214]: dracut-dracut-053 Aug 5 22:10:58.228055 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4a86c72568bc3f74d57effa5e252d5620941ef6d74241fc198859d020a6392c5 Aug 5 22:10:58.246011 systemd-resolved[205]: Positive Trust Anchors: Aug 5 22:10:58.246028 systemd-resolved[205]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:10:58.246081 systemd-resolved[205]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:10:58.252798 systemd-resolved[205]: Defaulting to hostname 'linux'. 
Aug 5 22:10:58.253709 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:10:58.267933 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:10:58.323697 kernel: SCSI subsystem initialized Aug 5 22:10:58.334695 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:10:58.347699 kernel: iscsi: registered transport (tcp) Aug 5 22:10:58.373015 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:10:58.373084 kernel: QLogic iSCSI HBA Driver Aug 5 22:10:58.407946 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:10:58.420007 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:10:58.452096 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:10:58.452168 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:10:58.455519 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:10:58.499705 kernel: raid6: avx512x4 gen() 18306 MB/s Aug 5 22:10:58.518694 kernel: raid6: avx512x2 gen() 18448 MB/s Aug 5 22:10:58.536695 kernel: raid6: avx512x1 gen() 18509 MB/s Aug 5 22:10:58.557695 kernel: raid6: avx2x4 gen() 18262 MB/s Aug 5 22:10:58.576687 kernel: raid6: avx2x2 gen() 18503 MB/s Aug 5 22:10:58.596517 kernel: raid6: avx2x1 gen() 13803 MB/s Aug 5 22:10:58.596564 kernel: raid6: using algorithm avx512x1 gen() 18509 MB/s Aug 5 22:10:58.617942 kernel: raid6: .... xor() 27109 MB/s, rmw enabled Aug 5 22:10:58.617974 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:10:58.643699 kernel: xor: automatically using best checksumming function avx Aug 5 22:10:58.808708 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:10:58.818455 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:10:58.825852 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:10:58.850798 systemd-udevd[396]: Using default interface naming scheme 'v255'. Aug 5 22:10:58.855144 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:10:58.867897 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:10:58.881215 dracut-pre-trigger[408]: rd.md=0: removing MD RAID activation Aug 5 22:10:58.907290 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:10:58.939099 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:10:58.981169 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:10:58.994872 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 5 22:10:59.026455 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:10:59.035172 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:10:59.041247 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:10:59.046641 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:10:59.057887 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:10:59.073210 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:10:59.077126 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:10:59.092048 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Aug 5 22:10:59.092185 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:59.097500 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:59.103002 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:10:59.103312 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.105851 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.123840 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:10:59.123863 kernel: AES CTR mode by8 optimization enabled Aug 5 22:10:59.127343 kernel: hv_vmbus: Vmbus version:5.2 Aug 5 22:10:59.128652 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.142675 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:10:59.142839 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.173825 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 22:10:59.173859 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 22:10:59.169886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:10:59.196799 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 22:10:59.196839 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 5 22:10:59.197741 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:10:59.216737 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 22:10:59.215824 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:10:59.228261 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 22:10:59.243957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:10:59.255689 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 22:10:59.259862 kernel: PTP clock support registered Aug 5 22:10:59.259898 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 22:10:59.275419 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 22:10:59.275488 kernel: hv_vmbus: registering driver hv_utils Aug 5 22:10:59.279971 kernel: scsi host1: storvsc_host_t Aug 5 22:10:59.280154 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 22:10:59.280167 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 22:10:59.284475 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 22:10:59.284650 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 22:10:59.289531 kernel: scsi host0: storvsc_host_t Aug 5 22:10:59.291569 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 22:10:59.130668 systemd-resolved[205]: Clock change detected. Flushing caches. Aug 5 22:10:59.143427 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 22:10:59.143635 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 22:10:59.144841 systemd-journald[176]: Time jumped backwards, rotating. 
Aug 5 22:10:59.162524 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 22:10:59.163707 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 22:10:59.163736 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 22:10:59.177599 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 22:10:59.190922 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 22:10:59.191091 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 22:10:59.191237 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 22:10:59.191393 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 22:10:59.191570 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:10:59.191586 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 22:10:59.207817 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: VF slot 1 added Aug 5 22:10:59.218800 kernel: hv_vmbus: registering driver hv_pci Aug 5 22:10:59.222821 kernel: hv_pci e7b1e3f9-cce6-41f0-9d91-181a10196386: PCI VMBus probing: Using version 0x10004 Aug 5 22:10:59.261522 kernel: hv_pci e7b1e3f9-cce6-41f0-9d91-181a10196386: PCI host bridge to bus cce6:00 Aug 5 22:10:59.261981 kernel: pci_bus cce6:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 5 22:10:59.262173 kernel: pci_bus cce6:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 22:10:59.262333 kernel: pci cce6:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 5 22:10:59.262537 kernel: pci cce6:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:10:59.262714 kernel: pci cce6:00:02.0: enabling Extended Tags Aug 5 22:10:59.262908 kernel: pci cce6:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at cce6:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 5 22:10:59.263077 kernel: pci_bus cce6:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 22:10:59.263459 kernel: pci cce6:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:10:59.443802 kernel: mlx5_core cce6:00:02.0: enabling device (0000 -> 0002) Aug 5 22:10:59.684983 kernel: mlx5_core cce6:00:02.0: firmware version: 14.30.1284 Aug 5 22:10:59.685194 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (451) Aug 5 22:10:59.685213 kernel: BTRFS: device fsid d3844c60-0a2c-449a-9ee9-2a875f8d8e12 devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (459) Aug 5 22:10:59.685229 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: VF registering: eth1 Aug 5 22:10:59.685383 kernel: mlx5_core cce6:00:02.0 eth1: joined to eth0 Aug 5 22:10:59.685550 kernel: mlx5_core cce6:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 5 22:10:59.593171 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 22:10:59.672466 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Aug 5 22:10:59.695822 kernel: mlx5_core cce6:00:02.0 enP52454s1: renamed from eth1 Aug 5 22:10:59.696133 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:10:59.709405 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 22:10:59.712542 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 22:10:59.730918 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Aug 5 22:10:59.741826 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:10:59.748815 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:11:00.756614 disk-uuid[605]: The operation has completed successfully. Aug 5 22:11:00.761901 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:11:00.835352 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:11:00.835520 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:11:00.860917 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:11:00.866570 sh[691]: Success Aug 5 22:11:00.894818 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:11:01.054079 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:11:01.069926 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:11:01.074139 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:11:01.089795 kernel: BTRFS info (device dm-0): first mount of filesystem d3844c60-0a2c-449a-9ee9-2a875f8d8e12 Aug 5 22:11:01.089832 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:01.094626 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:11:01.097244 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:11:01.099446 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:11:01.302203 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:11:01.305435 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:11:01.317257 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:11:01.322977 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:11:01.341971 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:01.342017 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:01.342036 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:01.372807 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:01.386729 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:11:01.388829 kernel: BTRFS info (device sda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:01.396863 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 5 22:11:01.407947 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:11:01.423927 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:11:01.433979 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:11:01.455274 systemd-networkd[875]: lo: Link UP Aug 5 22:11:01.455283 systemd-networkd[875]: lo: Gained carrier Aug 5 22:11:01.460038 systemd-networkd[875]: Enumeration completed Aug 5 22:11:01.461217 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:11:01.461736 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:01.461740 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 5 22:11:01.466864 systemd[1]: Reached target network.target - Network. Aug 5 22:11:01.521806 kernel: mlx5_core cce6:00:02.0 enP52454s1: Link up Aug 5 22:11:01.559808 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: Data path switched to VF: enP52454s1 Aug 5 22:11:01.560498 systemd-networkd[875]: enP52454s1: Link UP Aug 5 22:11:01.560633 systemd-networkd[875]: eth0: Link UP Aug 5 22:11:01.560818 systemd-networkd[875]: eth0: Gained carrier Aug 5 22:11:01.560832 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:01.565768 systemd-networkd[875]: enP52454s1: Gained carrier Aug 5 22:11:01.590830 systemd-networkd[875]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 5 22:11:02.083772 ignition[856]: Ignition 2.18.0 Aug 5 22:11:02.083803 ignition[856]: Stage: fetch-offline Aug 5 22:11:02.085394 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:11:02.083865 ignition[856]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.083877 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.084057 ignition[856]: parsed url from cmdline: "" Aug 5 22:11:02.084062 ignition[856]: no config URL provided Aug 5 22:11:02.098896 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 5 22:11:02.084070 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:11:02.084081 ignition[856]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:11:02.084087 ignition[856]: failed to fetch config: resource requires networking Aug 5 22:11:02.084314 ignition[856]: Ignition finished successfully Aug 5 22:11:02.111222 ignition[887]: Ignition 2.18.0 Aug 5 22:11:02.111229 ignition[887]: Stage: fetch Aug 5 22:11:02.111397 ignition[887]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.111408 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.111512 ignition[887]: parsed url from cmdline: "" Aug 5 22:11:02.111515 ignition[887]: no config URL provided Aug 5 22:11:02.111520 ignition[887]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:11:02.111529 ignition[887]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:11:02.111558 ignition[887]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 22:11:02.196861 ignition[887]: GET result: OK Aug 5 22:11:02.197064 ignition[887]: config has been read from IMDS userdata Aug 5 22:11:02.197105 ignition[887]: parsing config with SHA512: 3ffebe6164877bbb8f014036ac2d78838f5aa167dff435e6b1319e7a25358c54a61fdd0aa40584f23bacdf4eb7eb8bcac045992154d421bce4d8b50e902d5287 Aug 5 22:11:02.202608 unknown[887]: fetched base config from "system" Aug 5 22:11:02.202629 unknown[887]: fetched base config from "system" Aug 5 22:11:02.203829 ignition[887]: fetch: fetch complete Aug 5 22:11:02.202638 unknown[887]: fetched user config from "azure" Aug 5 22:11:02.203839 ignition[887]: fetch: fetch passed Aug 5 22:11:02.205919 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:11:02.203909 ignition[887]: Ignition finished successfully Aug 5 22:11:02.219009 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
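The fetch stage above pulls the Ignition config from the Azure IMDS userData endpoint and logs the SHA512 of what it parsed. A hedged sketch of replaying that request from a running instance; the "Metadata: true" header and the base64 encoding of userData are standard IMDS behaviour, but the hash printed here may not match Ignition's byte-for-byte if Ignition hashes a different representation of the config:

    # Sketch: fetch instance userData from Azure IMDS (same URL as in the Ignition log above).
    import base64
    import hashlib
    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/userData"
           "?api-version=2021-01-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})  # IMDS rejects requests without this header
    with urllib.request.urlopen(req, timeout=5) as resp:
        user_data = base64.b64decode(resp.read())  # userData is returned base64-encoded

    print("config SHA512:", hashlib.sha512(user_data).hexdigest())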
Aug 5 22:11:02.234752 ignition[895]: Ignition 2.18.0 Aug 5 22:11:02.234763 ignition[895]: Stage: kargs Aug 5 22:11:02.235016 ignition[895]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.238001 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:11:02.235029 ignition[895]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.235933 ignition[895]: kargs: kargs passed Aug 5 22:11:02.235988 ignition[895]: Ignition finished successfully Aug 5 22:11:02.248291 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:11:02.266405 ignition[902]: Ignition 2.18.0 Aug 5 22:11:02.266415 ignition[902]: Stage: disks Aug 5 22:11:02.268298 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:11:02.266614 ignition[902]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:02.272137 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:11:02.266629 ignition[902]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:02.276445 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:11:02.267491 ignition[902]: disks: disks passed Aug 5 22:11:02.279181 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:11:02.267530 ignition[902]: Ignition finished successfully Aug 5 22:11:02.283204 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:11:02.285465 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:11:02.300667 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:11:02.342202 systemd-fsck[911]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 22:11:02.347336 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:11:02.356930 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 5 22:11:02.461802 kernel: EXT4-fs (sda9): mounted filesystem e865ac73-053b-4efa-9a0f-50dec3f650d9 r/w with ordered data mode. Quota mode: none. Aug 5 22:11:02.461861 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:11:02.466103 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:11:02.497982 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:11:02.506077 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:11:02.517034 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (922) Aug 5 22:11:02.522816 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:02.522867 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:02.524736 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:02.530670 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:02.529154 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 22:11:02.534204 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:11:02.534245 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:11:02.538699 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:11:02.544045 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:11:02.553551 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Aug 5 22:11:02.720008 systemd-networkd[875]: enP52454s1: Gained IPv6LL Aug 5 22:11:02.960454 coreos-metadata[924]: Aug 05 22:11:02.960 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:11:02.970879 coreos-metadata[924]: Aug 05 22:11:02.970 INFO Fetch successful Aug 5 22:11:02.973209 coreos-metadata[924]: Aug 05 22:11:02.970 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:11:02.984013 coreos-metadata[924]: Aug 05 22:11:02.983 INFO Fetch successful Aug 5 22:11:02.995846 coreos-metadata[924]: Aug 05 22:11:02.995 INFO wrote hostname ci-3975.2.0-a-9e76a2f9cc to /sysroot/etc/hostname Aug 5 22:11:02.997518 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:11:03.080220 initrd-setup-root[951]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:11:03.108939 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:11:03.114903 initrd-setup-root[965]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:11:03.119609 initrd-setup-root[972]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:11:03.615948 systemd-networkd[875]: eth0: Gained IPv6LL Aug 5 22:11:03.626813 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:11:03.634882 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:11:03.640956 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:11:03.648025 kernel: BTRFS info (device sda6): last unmount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:03.649433 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 5 22:11:03.682938 ignition[1041]: INFO : Ignition 2.18.0 Aug 5 22:11:03.682938 ignition[1041]: INFO : Stage: mount Aug 5 22:11:03.682938 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:03.682938 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:03.682938 ignition[1041]: INFO : mount: mount passed Aug 5 22:11:03.682938 ignition[1041]: INFO : Ignition finished successfully Aug 5 22:11:03.683807 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:11:03.687472 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:11:03.703372 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:11:03.710577 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:11:03.727733 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1053) Aug 5 22:11:03.727792 kernel: BTRFS info (device sda6): first mount of filesystem b6695624-d538-4f05-9ddd-23ee987404c1 Aug 5 22:11:03.730751 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:11:03.732927 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:11:03.737812 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:11:03.738947 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Aug 5 22:11:03.761166 ignition[1070]: INFO : Ignition 2.18.0 Aug 5 22:11:03.761166 ignition[1070]: INFO : Stage: files Aug 5 22:11:03.765605 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:03.765605 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:03.765605 ignition[1070]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:11:03.765605 ignition[1070]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:11:03.765605 ignition[1070]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:11:03.824923 ignition[1070]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:11:03.829020 ignition[1070]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:11:03.829020 ignition[1070]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:11:03.825479 unknown[1070]: wrote ssh authorized keys file for user: core Aug 5 22:11:03.858700 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:11:03.863236 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:11:04.540606 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:11:04.680545 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:11:04.685587 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:11:04.697546 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:11:04.697546 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:11:04.705418 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:11:04.705418 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:11:04.713404 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:11:04.717558 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:11:04.721572 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:11:04.725536 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.731253 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.736774 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:04.736774 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 5 22:11:05.321750 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:11:06.318754 ignition[1070]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:11:06.318754 ignition[1070]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:11:06.345605 ignition[1070]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:11:06.350406 ignition[1070]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:11:06.350406 ignition[1070]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:11:06.357258 ignition[1070]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:11:06.357258 ignition[1070]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:11:06.363659 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:11:06.367608 ignition[1070]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:11:06.371454 ignition[1070]: INFO : files: files passed Aug 5 22:11:06.373194 ignition[1070]: INFO : Ignition finished successfully Aug 5 22:11:06.374003 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:11:06.383935 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:11:06.389844 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:11:06.395904 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:11:06.396543 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:11:06.413406 initrd-setup-root-after-ignition[1099]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.413406 initrd-setup-root-after-ignition[1099]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.420715 initrd-setup-root-after-ignition[1103]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:11:06.425001 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:11:06.430745 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:11:06.437917 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:11:06.460561 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:11:06.460665 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:11:06.466635 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Aug 5 22:11:06.472232 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:11:06.479192 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:11:06.489334 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:11:06.500873 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:11:06.508927 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:11:06.518555 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:11:06.519621 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:11:06.520435 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:11:06.520772 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:11:06.520913 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:11:06.521594 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:11:06.522025 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:11:06.522375 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:11:06.522762 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:11:06.523765 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:11:06.524167 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:11:06.524545 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:11:06.524954 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:11:06.525339 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:11:06.525720 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:11:06.526077 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 5 22:11:06.526205 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:11:06.526957 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:11:06.527372 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:11:06.527721 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:11:06.561716 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:11:06.565413 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:11:06.565523 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:11:06.575931 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:11:06.582348 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:11:06.622319 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:11:06.622496 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:11:06.626681 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 22:11:06.626837 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:11:06.641960 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:11:06.654985 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:11:06.657150 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 5 22:11:06.657314 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:11:06.662012 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:11:06.662153 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:11:06.678928 ignition[1123]: INFO : Ignition 2.18.0 Aug 5 22:11:06.678928 ignition[1123]: INFO : Stage: umount Aug 5 22:11:06.678928 ignition[1123]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:11:06.678928 ignition[1123]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:11:06.677309 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:11:06.691432 ignition[1123]: INFO : umount: umount passed Aug 5 22:11:06.691432 ignition[1123]: INFO : Ignition finished successfully Aug 5 22:11:06.677429 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:11:06.691548 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:11:06.691639 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:11:06.697548 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:11:06.697607 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:11:06.702331 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:11:06.702380 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:11:06.706630 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:11:06.706679 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:11:06.709700 systemd[1]: Stopped target network.target - Network. Aug 5 22:11:06.710408 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 5 22:11:06.710453 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:11:06.710820 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:11:06.711144 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:11:06.736491 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:11:06.739234 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:11:06.741122 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:11:06.743149 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:11:06.743194 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:11:06.744220 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:11:06.744256 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:11:06.744499 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:11:06.744539 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:11:06.745246 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:11:06.745275 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:11:06.745844 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:11:06.746141 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:11:06.747601 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:11:06.748105 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:11:06.748194 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:11:06.748922 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Aug 5 22:11:06.749026 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:11:06.770729 systemd-networkd[875]: eth0: DHCPv6 lease lost Aug 5 22:11:06.774246 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:11:06.774420 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:11:06.780134 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:11:06.780208 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:11:06.809691 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:11:06.809875 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:11:06.814699 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:11:06.814977 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:11:06.828639 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:11:06.830754 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:11:06.833000 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:11:06.837845 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:11:06.840212 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:11:06.844751 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:11:06.844813 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:11:06.853638 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:11:06.872351 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:11:06.872518 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:11:06.877285 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:11:06.877326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 5 22:11:06.898430 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:11:06.898470 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:11:06.905236 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:11:06.907433 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:11:06.914431 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:11:06.914473 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:11:06.922371 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:11:06.929860 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: Data path switched from VF: enP52454s1 Aug 5 22:11:06.922432 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:11:06.937943 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:11:06.941829 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:11:06.941882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:11:06.945409 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 5 22:11:06.945453 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:11:06.950532 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
Aug 5 22:11:06.950583 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:11:06.956100 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:11:06.956152 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:11:06.964693 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:11:06.964810 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:11:06.972006 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:11:06.972099 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:11:06.977241 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:11:06.991330 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:11:07.012626 systemd[1]: Switching root. Aug 5 22:11:07.086280 systemd-journald[176]: Journal stopped Aug 5 22:11:12.006251 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Aug 5 22:11:12.006298 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:11:12.006316 kernel: SELinux: policy capability open_perms=1 Aug 5 22:11:12.006330 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:11:12.006344 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:11:12.006358 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:11:12.006374 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:11:12.006392 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:11:12.006406 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:11:12.006421 kernel: audit: type=1403 audit(1722895869.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:11:12.006437 systemd[1]: Successfully loaded SELinux policy in 121.587ms. Aug 5 22:11:12.006453 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.893ms. Aug 5 22:11:12.006470 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:11:12.006487 systemd[1]: Detected virtualization microsoft. Aug 5 22:11:12.006507 systemd[1]: Detected architecture x86-64. Aug 5 22:11:12.006523 systemd[1]: Detected first boot. Aug 5 22:11:12.006540 systemd[1]: Hostname set to <ci-3975.2.0-a-9e76a2f9cc>. Aug 5 22:11:12.006555 systemd[1]: Initializing machine ID from random generator. Aug 5 22:11:12.006572 zram_generator::config[1165]: No configuration found. Aug 5 22:11:12.006593 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:11:12.006609 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:11:12.006625 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:11:12.006642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:11:12.006659 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:11:12.006676 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:11:12.006693 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:11:12.006715 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 5 22:11:12.006732 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:11:12.006749 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:11:12.006767 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:11:12.006803 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:11:12.006817 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:11:12.006827 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:11:12.006836 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:11:12.006850 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:11:12.006860 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:11:12.006870 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:11:12.006880 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 22:11:12.006890 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:11:12.006905 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:11:12.006919 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:11:12.006928 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:11:12.006941 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:11:12.006952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:11:12.006962 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:11:12.006973 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:11:12.006982 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:11:12.006993 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:11:12.007005 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:11:12.007017 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:11:12.007030 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:11:12.007041 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:11:12.007051 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:11:12.007064 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:11:12.007080 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:11:12.007093 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:11:12.007108 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:12.007122 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:11:12.007136 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:11:12.007151 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... 
Aug 5 22:11:12.007168 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:11:12.007183 systemd[1]: Reached target machines.target - Containers. Aug 5 22:11:12.007202 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:11:12.007218 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:11:12.007237 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:11:12.007255 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:11:12.007274 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:11:12.007292 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:11:12.007309 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:11:12.007326 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:11:12.007342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:11:12.007361 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:11:12.007375 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:11:12.007388 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:11:12.007398 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:11:12.007411 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 22:11:12.007422 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:11:12.007435 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:11:12.007447 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 5 22:11:12.007462 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:11:12.007474 kernel: loop: module loaded Aug 5 22:11:12.007485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:11:12.007497 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:11:12.007509 systemd[1]: Stopped verity-setup.service. Aug 5 22:11:12.007520 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:12.007557 systemd-journald[1256]: Collecting audit messages is disabled. Aug 5 22:11:12.007589 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:11:12.007600 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:11:12.007613 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:11:12.007624 kernel: ACPI: bus type drm_connector registered Aug 5 22:11:12.007635 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:11:12.007647 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:11:12.007661 systemd-journald[1256]: Journal started Aug 5 22:11:12.007685 systemd-journald[1256]: Runtime Journal (/run/log/journal/7e028b1e79404b4fa648670a38e1edc5) is 8.0M, max 158.8M, 150.8M free. 
Aug 5 22:11:11.386450 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:11:11.482157 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 5 22:11:11.482521 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 22:11:12.017196 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:11:12.019354 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:11:12.021796 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:11:12.024913 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:11:12.025074 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 22:11:12.027988 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:11:12.028226 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:11:12.031216 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:11:12.031372 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:11:12.034107 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:11:12.034257 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:11:12.037281 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:11:12.037434 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:11:12.040091 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:11:12.043356 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:11:12.050094 kernel: fuse: init (API version 7.39) Aug 5 22:11:12.050935 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:11:12.051102 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:11:12.059345 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:11:12.066922 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:11:12.074262 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 5 22:11:12.077190 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:11:12.077319 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:11:12.080733 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:11:12.087931 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:11:12.091461 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:11:12.094068 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:11:12.146967 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:11:12.157927 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:11:12.160665 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:11:12.166941 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Aug 5 22:11:12.169512 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:11:12.174927 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:11:12.181935 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:11:12.188009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:11:12.195871 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:11:12.199196 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:11:12.203138 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:11:12.206857 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:11:12.210290 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:11:12.218059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:11:12.225675 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:11:12.232953 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Aug 5 22:11:12.238904 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:11:12.242955 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:11:12.269982 udevadm[1309]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 22:11:12.290386 systemd-journald[1256]: Time spent on flushing to /var/log/journal/7e028b1e79404b4fa648670a38e1edc5 is 75.153ms for 967 entries. Aug 5 22:11:12.290386 systemd-journald[1256]: System Journal (/var/log/journal/7e028b1e79404b4fa648670a38e1edc5) is 11.9M, max 2.6G, 2.6G free. Aug 5 22:11:12.440963 systemd-journald[1256]: Received client request to flush runtime journal. Aug 5 22:11:12.441021 kernel: loop0: detected capacity change from 0 to 209816 Aug 5 22:11:12.441043 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:11:12.441129 systemd-journald[1256]: /var/log/journal/7e028b1e79404b4fa648670a38e1edc5/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Aug 5 22:11:12.441209 systemd-journald[1256]: Rotating system journal. Aug 5 22:11:12.441242 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:11:12.441263 kernel: loop1: detected capacity change from 0 to 56904 Aug 5 22:11:12.323135 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:11:12.336882 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:11:12.337677 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:11:12.354416 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Aug 5 22:11:12.354437 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Aug 5 22:11:12.367570 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:11:12.378987 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:11:12.443023 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Aug 5 22:11:12.652526 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:11:12.661025 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:11:12.683657 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Aug 5 22:11:12.683683 systemd-tmpfiles[1325]: ACLs are not supported, ignoring. Aug 5 22:11:12.689474 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:11:12.693801 kernel: loop2: detected capacity change from 0 to 80568 Aug 5 22:11:12.931809 kernel: loop3: detected capacity change from 0 to 139904 Aug 5 22:11:13.146813 kernel: loop4: detected capacity change from 0 to 209816 Aug 5 22:11:13.154806 kernel: loop5: detected capacity change from 0 to 56904 Aug 5 22:11:13.161805 kernel: loop6: detected capacity change from 0 to 80568 Aug 5 22:11:13.174873 kernel: loop7: detected capacity change from 0 to 139904 Aug 5 22:11:13.185529 (sd-merge)[1330]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 5 22:11:13.186092 (sd-merge)[1330]: Merged extensions into '/usr'. Aug 5 22:11:13.191048 systemd[1]: Reloading requested from client PID 1297 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:11:13.191063 systemd[1]: Reloading... Aug 5 22:11:13.267872 zram_generator::config[1351]: No configuration found. Aug 5 22:11:13.480119 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:11:13.540401 systemd[1]: Reloading finished in 348 ms. Aug 5 22:11:13.570206 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:11:13.573929 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:11:13.587204 systemd[1]: Starting ensure-sysext.service... Aug 5 22:11:13.590295 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:11:13.595960 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:11:13.622086 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:11:13.622588 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:11:13.623770 systemd-tmpfiles[1414]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:11:13.624233 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Aug 5 22:11:13.624329 systemd-tmpfiles[1414]: ACLs are not supported, ignoring. Aug 5 22:11:13.633451 systemd-udevd[1415]: Using default interface naming scheme 'v255'. Aug 5 22:11:13.637824 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:11:13.637836 systemd-tmpfiles[1414]: Skipping /boot Aug 5 22:11:13.648445 systemd-tmpfiles[1414]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:11:13.648457 systemd-tmpfiles[1414]: Skipping /boot Aug 5 22:11:13.655834 systemd[1]: Reloading requested from client PID 1413 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:11:13.655856 systemd[1]: Reloading... Aug 5 22:11:13.754433 zram_generator::config[1450]: No configuration found. 
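The (sd-merge) entries above show systemd-sysext discovering the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure') and merging them into /usr. A small sketch that only enumerates which *.raw images would be considered; the search directories listed are an assumption about where sysext looks (only /etc/extensions/kubernetes.raw is actually visible earlier in this log):

    # Sketch: list extension images by name, the way the "(sd-merge): Using extensions ..." line does.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions", "/usr/lib/extensions"]

    found = []
    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for img in sorted(p.glob("*.raw")):
            found.append(img.stem)  # e.g. "kubernetes" from kubernetes.raw

    print("Using extensions", ", ".join(repr(n) for n in found) if found else "(none)")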
Aug 5 22:11:13.861808 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1465) Aug 5 22:11:13.949021 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:11:13.996828 kernel: hv_vmbus: registering driver hv_balloon Aug 5 22:11:14.008806 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 5 22:11:14.020806 kernel: hv_vmbus: registering driver hyperv_fb Aug 5 22:11:14.020848 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 5 22:11:14.028857 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 5 22:11:14.036914 kernel: Console: switching to colour dummy device 80x25 Aug 5 22:11:14.047809 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:11:14.189232 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:11:14.313594 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 22:11:14.313929 systemd[1]: Reloading finished in 657 ms. Aug 5 22:11:14.333486 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:11:14.340456 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:11:14.356801 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1465) Aug 5 22:11:14.380095 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Aug 5 22:11:14.425155 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:11:14.436277 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:11:14.442271 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:11:14.455272 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:11:14.492245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:11:14.496451 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:11:14.502215 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:11:14.524268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:14.524726 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:11:14.531317 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:11:14.540329 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:11:14.548356 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:11:14.553126 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:11:14.555059 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:14.571508 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:11:14.576918 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 5 22:11:14.577532 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:11:14.585260 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:11:14.585734 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:11:14.592320 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:11:14.592500 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:11:14.607529 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:11:14.646956 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:11:14.647638 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:11:14.668751 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:11:14.676255 systemd[1]: Finished ensure-sysext.service. Aug 5 22:11:14.681508 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:14.681774 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:11:14.686428 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:11:14.692160 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:11:14.709030 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:11:14.715468 augenrules[1610]: No rules Aug 5 22:11:14.717011 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:11:14.719755 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:11:14.723112 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:11:14.726564 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:11:14.733934 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:11:14.750352 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:11:14.753968 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:11:14.756047 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:11:14.759827 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:11:14.759999 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:11:14.763581 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:11:14.763897 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:11:14.766971 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:11:14.767149 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:11:14.771048 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:11:14.771235 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:11:14.774381 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:11:14.791197 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
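augenrules reports "No rules" above, so audit-rules.service finishes with an empty ruleset. If auditing were wanted, rules would normally be dropped into /etc/audit/rules.d/ and compiled by augenrules; a purely illustrative example of such a rule file:

    # /etc/audit/rules.d/99-example.rules  (hypothetical; this host loads no rules)
    -w /etc/ssh/sshd_config -p wa -k sshd_config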
Aug 5 22:11:14.791365 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:11:14.796236 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:11:14.807990 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:11:14.814619 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:11:14.856472 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:11:14.862426 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:11:14.865818 lvm[1630]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:11:14.910988 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:11:14.914430 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:11:14.932181 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:11:14.935351 systemd-resolved[1580]: Positive Trust Anchors: Aug 5 22:11:14.935366 systemd-resolved[1580]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:11:14.935552 systemd-resolved[1580]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:11:14.943215 systemd-resolved[1580]: Using system hostname 'ci-3975.2.0-a-9e76a2f9cc'. Aug 5 22:11:14.945483 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:11:14.950890 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:11:14.956194 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:11:14.965364 systemd-networkd[1574]: lo: Link UP Aug 5 22:11:14.965373 systemd-networkd[1574]: lo: Gained carrier Aug 5 22:11:14.968525 systemd-networkd[1574]: Enumeration completed Aug 5 22:11:14.968618 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:11:14.969663 systemd[1]: Reached target network.target - Network. Aug 5 22:11:14.973371 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:14.973463 systemd-networkd[1574]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:11:14.974129 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:11:14.980994 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. 
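systemd-networkd notes above that eth0 matched the catch-all zz-default.network "based on potentially unpredictable interface name". A more specific unit pinned to the NIC's MAC address would avoid that warning while keeping DHCP; a minimal sketch, assuming the stock catch-all stays in place under /usr:

    # /etc/systemd/network/10-eth0.network  (hypothetical override)
    [Match]
    MACAddress=00:0d:3a:b9:1c:e7

    [Network]
    DHCP=yes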
Aug 5 22:11:15.032807 kernel: mlx5_core cce6:00:02.0 enP52454s1: Link up Aug 5 22:11:15.051898 kernel: hv_netvsc 000d3ab9-1ce7-000d-3ab9-1ce7000d3ab9 eth0: Data path switched to VF: enP52454s1 Aug 5 22:11:15.052693 systemd-networkd[1574]: enP52454s1: Link UP Aug 5 22:11:15.052907 systemd-networkd[1574]: eth0: Link UP Aug 5 22:11:15.052914 systemd-networkd[1574]: eth0: Gained carrier Aug 5 22:11:15.052942 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:15.057520 systemd-networkd[1574]: enP52454s1: Gained carrier Aug 5 22:11:15.090830 systemd-networkd[1574]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 5 22:11:15.137132 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:11:16.745190 ldconfig[1293]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:11:16.761889 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:11:16.768970 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:11:16.788026 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:11:16.791205 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:11:16.794035 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:11:16.797418 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:11:16.800588 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:11:16.803034 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:11:16.805835 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:11:16.808584 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:11:16.808628 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:11:16.810615 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:11:16.813517 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:11:16.817303 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 5 22:11:16.826221 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:11:16.829205 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:11:16.831666 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:11:16.833815 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:11:16.835988 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:11:16.836018 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:11:16.841877 systemd[1]: Starting chronyd.service - NTP client/server... Aug 5 22:11:16.846948 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:11:16.854050 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 22:11:16.864991 systemd-networkd[1574]: eth0: Gained IPv6LL Aug 5 22:11:16.865594 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
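The hv_netvsc line above records Azure Accelerated Networking at work: the synthetic eth0 keeps the IP configuration (10.200.8.39/24 leased from 168.63.129.16) while the data path is switched to the Mellanox VF enP52454s1. The pairing can be inspected with standard tooling, for example:

    networkctl status eth0
    ip -d link show enP52454s1   # "master eth0" marks it as the VF behind the synthetic NIC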
Aug 5 22:11:16.870583 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:11:16.875953 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:11:16.878890 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:11:16.888936 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:11:16.895490 jq[1655]: false Aug 5 22:11:16.895894 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:11:16.900961 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:11:16.911100 (chronyd)[1651]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 5 22:11:16.912988 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:11:16.927929 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:11:16.930837 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:11:16.931301 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:11:16.931965 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:11:16.941898 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:11:16.945654 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:11:16.950716 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:11:16.951264 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:11:16.954102 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:11:16.954310 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:11:16.966480 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:11:16.969093 jq[1670]: true Aug 5 22:11:16.969583 chronyd[1680]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 5 22:11:16.975461 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:11:16.981270 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Aug 5 22:11:16.981613 extend-filesystems[1656]: Found loop4 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found loop5 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found loop6 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found loop7 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda1 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda2 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda3 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found usr Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda4 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda6 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda7 Aug 5 22:11:16.986950 extend-filesystems[1656]: Found sda9 Aug 5 22:11:16.986950 extend-filesystems[1656]: Checking size of /dev/sda9 Aug 5 22:11:17.041607 chronyd[1680]: Timezone right/UTC failed leap second check, ignoring Aug 5 22:11:17.041899 chronyd[1680]: Loaded seccomp filter (level 2) Aug 5 22:11:17.043167 systemd[1]: Started chronyd.service - NTP client/server. Aug 5 22:11:17.045732 (ntainerd)[1689]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:11:17.047461 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:11:17.047831 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:11:17.057879 systemd-networkd[1574]: enP52454s1: Gained IPv6LL Aug 5 22:11:17.069985 tar[1673]: linux-amd64/helm Aug 5 22:11:17.088248 update_engine[1669]: I0805 22:11:17.088176 1669 main.cc:92] Flatcar Update Engine starting Aug 5 22:11:17.091803 jq[1687]: true Aug 5 22:11:17.101008 extend-filesystems[1656]: Old size kept for /dev/sda9 Aug 5 22:11:17.101008 extend-filesystems[1656]: Found sr0 Aug 5 22:11:17.107957 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:11:17.108185 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:11:17.155264 dbus-daemon[1654]: [system] SELinux support is enabled Aug 5 22:11:17.155449 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:11:17.165062 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:11:17.165101 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:11:17.170217 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:11:17.170247 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:11:17.175136 update_engine[1669]: I0805 22:11:17.174939 1669 update_check_scheduler.cc:74] Next update check in 9m0s Aug 5 22:11:17.180047 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:11:17.193824 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:11:17.207768 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:11:17.259047 systemd-logind[1663]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:11:17.262051 systemd-logind[1663]: New seat seat0. Aug 5 22:11:17.264272 systemd[1]: Started systemd-logind.service - User Login Management. 
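update_engine starts above with its first check scheduled in 9 minutes, and locksmithd (the cluster reboot manager) coordinates when a resulting reboot is allowed to happen. On Flatcar that behaviour is normally driven by a small config file; a hedged sketch of the kind of setting involved (the file itself is not dumped in this log):

    # /etc/flatcar/update.conf
    REBOOT_STRATEGY=reboot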
Aug 5 22:11:17.277318 bash[1726]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:11:17.281223 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:11:17.286547 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:11:17.287288 sshd_keygen[1674]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:11:17.333894 coreos-metadata[1653]: Aug 05 22:11:17.328 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:11:17.333894 coreos-metadata[1653]: Aug 05 22:11:17.331 INFO Fetch successful Aug 5 22:11:17.333894 coreos-metadata[1653]: Aug 05 22:11:17.332 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 5 22:11:17.339174 coreos-metadata[1653]: Aug 05 22:11:17.337 INFO Fetch successful Aug 5 22:11:17.339174 coreos-metadata[1653]: Aug 05 22:11:17.338 INFO Fetching http://168.63.129.16/machine/95550d5c-0d3b-4f38-becf-df9194e1230b/cf0cadab%2D2f0b%2D4ffc%2Dba9a%2Ddef93b1879dc.%5Fci%2D3975.2.0%2Da%2D9e76a2f9cc?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 5 22:11:17.341775 coreos-metadata[1653]: Aug 05 22:11:17.341 INFO Fetch successful Aug 5 22:11:17.343769 coreos-metadata[1653]: Aug 05 22:11:17.342 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:11:17.355485 coreos-metadata[1653]: Aug 05 22:11:17.355 INFO Fetch successful Aug 5 22:11:17.360998 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1465) Aug 5 22:11:17.436944 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:11:17.484746 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 5 22:11:17.526924 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:11:17.532089 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:11:17.542966 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 5 22:11:17.595136 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:11:17.595348 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:11:17.608096 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:11:17.622109 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 5 22:11:17.630331 locksmithd[1724]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:11:17.649624 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:11:17.660670 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:11:17.672177 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:11:17.675335 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:11:18.000323 containerd[1689]: time="2024-08-05T22:11:18.000065000Z" level=info msg="starting containerd" revision=1fbfc07f8d28210e62bdbcbf7b950bac8028afbf version=v1.7.17 Aug 5 22:11:18.072818 tar[1673]: linux-amd64/LICENSE Aug 5 22:11:18.072818 tar[1673]: linux-amd64/README.md Aug 5 22:11:18.089052 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:11:18.097332 containerd[1689]: time="2024-08-05T22:11:18.097284200Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." 
type=io.containerd.warning.v1 Aug 5 22:11:18.097609 containerd[1689]: time="2024-08-05T22:11:18.097440900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.098978600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099016700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099247900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099269400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099360500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099420300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099437400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099523200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099730500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099753900Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:11:18.100218 containerd[1689]: time="2024-08-05T22:11:18.099768900Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100578 containerd[1689]: time="2024-08-05T22:11:18.099963500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:11:18.100578 containerd[1689]: time="2024-08-05T22:11:18.099986900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Aug 5 22:11:18.100578 containerd[1689]: time="2024-08-05T22:11:18.100056100Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:11:18.100578 containerd[1689]: time="2024-08-05T22:11:18.100072300Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:11:18.116292 containerd[1689]: time="2024-08-05T22:11:18.116259400Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:11:18.116358 containerd[1689]: time="2024-08-05T22:11:18.116302200Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:11:18.116358 containerd[1689]: time="2024-08-05T22:11:18.116321400Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:11:18.116425 containerd[1689]: time="2024-08-05T22:11:18.116381600Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Aug 5 22:11:18.116425 containerd[1689]: time="2024-08-05T22:11:18.116406900Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:11:18.116425 containerd[1689]: time="2024-08-05T22:11:18.116421700Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:11:18.116531 containerd[1689]: time="2024-08-05T22:11:18.116437900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:11:18.116638 containerd[1689]: time="2024-08-05T22:11:18.116565900Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:11:18.116638 containerd[1689]: time="2024-08-05T22:11:18.116607200Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:11:18.116638 containerd[1689]: time="2024-08-05T22:11:18.116627500Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116648400Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116669300Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116694000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116712900Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116732700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116769600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116804100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116824500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116841600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.116979400Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.117287200Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.117322400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.117342500Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Aug 5 22:11:18.117990 containerd[1689]: time="2024-08-05T22:11:18.117374900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117443500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117461500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117478100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117494200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117511700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117528100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117544100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117562900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117580700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117717000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117737200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117754900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117773600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117807800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118427 containerd[1689]: time="2024-08-05T22:11:18.117827700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118994 containerd[1689]: time="2024-08-05T22:11:18.117844300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.118994 containerd[1689]: time="2024-08-05T22:11:18.117859200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Aug 5 22:11:18.119068 containerd[1689]: time="2024-08-05T22:11:18.118206200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:11:18.119068 containerd[1689]: time="2024-08-05T22:11:18.118284200Z" 
level=info msg="Connect containerd service" Aug 5 22:11:18.119068 containerd[1689]: time="2024-08-05T22:11:18.118328200Z" level=info msg="using legacy CRI server" Aug 5 22:11:18.119068 containerd[1689]: time="2024-08-05T22:11:18.118337900Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:11:18.119068 containerd[1689]: time="2024-08-05T22:11:18.118478900Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.120794600Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.120851500Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.120883400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.120904400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.120927900Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.121669700Z" level=info msg="Start subscribing containerd event" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.121724500Z" level=info msg="Start recovering state" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.121940200Z" level=info msg="Start event monitor" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.122147000Z" level=info msg="Start snapshots syncer" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.122160900Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.122171000Z" level=info msg="Start streaming server" Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.122086000Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:11:18.122494 containerd[1689]: time="2024-08-05T22:11:18.122295600Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:11:18.122824 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:11:18.128125 containerd[1689]: time="2024-08-05T22:11:18.126880200Z" level=info msg="containerd successfully booted in 0.129468s" Aug 5 22:11:18.378954 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:11:18.380273 (kubelet)[1804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:11:18.387503 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:11:18.393842 systemd[1]: Startup finished in 633ms (firmware) + 19.456s (loader) + 923ms (kernel) + 12.216s (initrd) + 8.649s (userspace) = 41.878s. 
Aug 5 22:11:18.588212 login[1786]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 22:11:18.591821 login[1787]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 22:11:18.608302 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:11:18.616184 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:11:18.623848 systemd-logind[1663]: New session 1 of user core. Aug 5 22:11:18.637996 systemd-logind[1663]: New session 2 of user core. Aug 5 22:11:18.646396 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:11:18.652164 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:11:18.670466 (systemd)[1815]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:11:18.879405 systemd[1815]: Queued start job for default target default.target. Aug 5 22:11:18.885195 systemd[1815]: Created slice app.slice - User Application Slice. Aug 5 22:11:18.885229 systemd[1815]: Reached target paths.target - Paths. Aug 5 22:11:18.885247 systemd[1815]: Reached target timers.target - Timers. Aug 5 22:11:18.888970 systemd[1815]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:11:18.904387 systemd[1815]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:11:18.904527 systemd[1815]: Reached target sockets.target - Sockets. Aug 5 22:11:18.904556 systemd[1815]: Reached target basic.target - Basic System. Aug 5 22:11:18.904599 systemd[1815]: Reached target default.target - Main User Target. Aug 5 22:11:18.904637 systemd[1815]: Startup finished in 222ms. Aug 5 22:11:18.904761 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:11:18.909284 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:11:18.911487 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 5 22:11:19.017926 waagent[1784]: 2024-08-05T22:11:19.017820Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.019277Z INFO Daemon Daemon OS: flatcar 3975.2.0 Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.019775Z INFO Daemon Daemon Python: 3.11.9 Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.020420Z INFO Daemon Daemon Run daemon Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.021236Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3975.2.0' Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.021597Z INFO Daemon Daemon Using waagent for provisioning Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.022826Z INFO Daemon Daemon Activate resource disk Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.023160Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.027158Z INFO Daemon Daemon Found device: None Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.027904Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.028670Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.030952Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 22:11:19.055037 waagent[1784]: 2024-08-05T22:11:19.031477Z INFO Daemon Daemon Running default provisioning handler Aug 5 22:11:19.060466 waagent[1784]: 2024-08-05T22:11:19.060393Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 5 22:11:19.072855 waagent[1784]: 2024-08-05T22:11:19.061904Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 5 22:11:19.072855 waagent[1784]: 2024-08-05T22:11:19.062575Z INFO Daemon Daemon cloud-init is enabled: False Aug 5 22:11:19.072855 waagent[1784]: 2024-08-05T22:11:19.063301Z INFO Daemon Daemon Copying ovf-env.xml Aug 5 22:11:19.157507 waagent[1784]: 2024-08-05T22:11:19.156866Z INFO Daemon Daemon Successfully mounted dvd Aug 5 22:11:19.178346 waagent[1784]: 2024-08-05T22:11:19.176002Z INFO Daemon Daemon Detect protocol endpoint Aug 5 22:11:19.177089 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 5 22:11:19.179165 waagent[1784]: 2024-08-05T22:11:19.178950Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 22:11:19.182192 waagent[1784]: 2024-08-05T22:11:19.182121Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Aug 5 22:11:19.185760 waagent[1784]: 2024-08-05T22:11:19.185375Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 5 22:11:19.188410 waagent[1784]: 2024-08-05T22:11:19.188015Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 5 22:11:19.190343 waagent[1784]: 2024-08-05T22:11:19.190290Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 5 22:11:19.204395 waagent[1784]: 2024-08-05T22:11:19.204265Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 5 22:11:19.207939 waagent[1784]: 2024-08-05T22:11:19.207299Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 5 22:11:19.209635 waagent[1784]: 2024-08-05T22:11:19.209474Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 5 22:11:19.266162 kubelet[1804]: E0805 22:11:19.266098 1804 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:11:19.268864 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:11:19.269148 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:11:19.299094 waagent[1784]: 2024-08-05T22:11:19.299019Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 5 22:11:19.304275 waagent[1784]: 2024-08-05T22:11:19.300193Z INFO Daemon Daemon Forcing an update of the goal state. Aug 5 22:11:19.304541 waagent[1784]: 2024-08-05T22:11:19.304491Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 22:11:19.320673 waagent[1784]: 2024-08-05T22:11:19.320617Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.152 Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.322128Z INFO Daemon Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.323728Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 597aa6c5-edab-48f1-a65d-0fc8ce2cf93c eTag: 5904927306394689069 source: Fabric] Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.325126Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.326177Z INFO Daemon Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.326884Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 5 22:11:19.334975 waagent[1784]: 2024-08-05T22:11:19.331244Z INFO Daemon Daemon Downloading artifacts profile blob Aug 5 22:11:19.408994 waagent[1784]: 2024-08-05T22:11:19.408877Z INFO Daemon Downloaded certificate {'thumbprint': 'AC68027818EA265ECA6C6C2517CE3C40DF0E16A6', 'hasPrivateKey': True} Aug 5 22:11:19.413671 waagent[1784]: 2024-08-05T22:11:19.413616Z INFO Daemon Downloaded certificate {'thumbprint': '84FF8807E1857A3A194D137087256C3CDE26D7CA', 'hasPrivateKey': False} Aug 5 22:11:19.419289 waagent[1784]: 2024-08-05T22:11:19.414906Z INFO Daemon Fetch goal state completed Aug 5 22:11:19.423514 waagent[1784]: 2024-08-05T22:11:19.423470Z INFO Daemon Daemon Starting provisioning Aug 5 22:11:19.429618 waagent[1784]: 2024-08-05T22:11:19.424505Z INFO Daemon Daemon Handle ovf-env.xml. 
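The kubelet failure above ("failed to load Kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is typically the expected state on a node that has not yet been joined to a cluster: kubeadm writes that file during init/join, and systemd keeps restarting the unit until then. For reference, the loader expects a KubeletConfiguration document, minimally something like this (illustrative only, not what kubeadm will eventually write):

    # /var/lib/kubelet/config.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd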
Aug 5 22:11:19.429618 waagent[1784]: 2024-08-05T22:11:19.425355Z INFO Daemon Daemon Set hostname [ci-3975.2.0-a-9e76a2f9cc] Aug 5 22:11:19.441151 waagent[1784]: 2024-08-05T22:11:19.441095Z INFO Daemon Daemon Publish hostname [ci-3975.2.0-a-9e76a2f9cc] Aug 5 22:11:19.447933 waagent[1784]: 2024-08-05T22:11:19.442336Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 5 22:11:19.447933 waagent[1784]: 2024-08-05T22:11:19.443104Z INFO Daemon Daemon Primary interface is [eth0] Aug 5 22:11:19.464909 systemd-networkd[1574]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:11:19.464918 systemd-networkd[1574]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:11:19.464968 systemd-networkd[1574]: eth0: DHCP lease lost Aug 5 22:11:19.466201 waagent[1784]: 2024-08-05T22:11:19.466130Z INFO Daemon Daemon Create user account if not exists Aug 5 22:11:19.468723 systemd-networkd[1574]: eth0: DHCPv6 lease lost Aug 5 22:11:19.470864 waagent[1784]: 2024-08-05T22:11:19.468718Z INFO Daemon Daemon User core already exists, skip useradd Aug 5 22:11:19.470864 waagent[1784]: 2024-08-05T22:11:19.469764Z INFO Daemon Daemon Configure sudoer Aug 5 22:11:19.471065 waagent[1784]: 2024-08-05T22:11:19.471019Z INFO Daemon Daemon Configure sshd Aug 5 22:11:19.472051 waagent[1784]: 2024-08-05T22:11:19.472007Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 5 22:11:19.472633 waagent[1784]: 2024-08-05T22:11:19.472599Z INFO Daemon Daemon Deploy ssh public key. Aug 5 22:11:19.529887 systemd-networkd[1574]: eth0: DHCPv4 address 10.200.8.39/24, gateway 10.200.8.1 acquired from 168.63.129.16 Aug 5 22:11:20.853335 waagent[1784]: 2024-08-05T22:11:20.853256Z INFO Daemon Daemon Provisioning complete Aug 5 22:11:20.867858 waagent[1784]: 2024-08-05T22:11:20.867801Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 5 22:11:20.870517 waagent[1784]: 2024-08-05T22:11:20.870462Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
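waagent reports above that it added an sshd configuration snippet disabling password-based authentication and enabling client keep-alive probing; the snippet itself is not echoed to the log. Its effect corresponds to directives along these lines (illustrative values, not the agent's literal file):

    PasswordAuthentication no
    ChallengeResponseAuthentication no
    ClientAliveInterval 180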
Aug 5 22:11:20.874601 waagent[1784]: 2024-08-05T22:11:20.874545Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 5 22:11:20.997950 waagent[1866]: 2024-08-05T22:11:20.997860Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 5 22:11:20.998349 waagent[1866]: 2024-08-05T22:11:20.998009Z INFO ExtHandler ExtHandler OS: flatcar 3975.2.0 Aug 5 22:11:20.998349 waagent[1866]: 2024-08-05T22:11:20.998090Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 5 22:11:21.006929 waagent[1866]: 2024-08-05T22:11:21.006868Z INFO ExtHandler ExtHandler Distro: flatcar-3975.2.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 5 22:11:21.007106 waagent[1866]: 2024-08-05T22:11:21.007064Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:11:21.007198 waagent[1866]: 2024-08-05T22:11:21.007154Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:11:21.014301 waagent[1866]: 2024-08-05T22:11:21.014241Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 22:11:21.019971 waagent[1866]: 2024-08-05T22:11:21.019925Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.152 Aug 5 22:11:21.020384 waagent[1866]: 2024-08-05T22:11:21.020331Z INFO ExtHandler Aug 5 22:11:21.020465 waagent[1866]: 2024-08-05T22:11:21.020419Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: f2a5c76f-e323-4be8-b1f1-b607cf19d95f eTag: 5904927306394689069 source: Fabric] Aug 5 22:11:21.020772 waagent[1866]: 2024-08-05T22:11:21.020721Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 5 22:11:21.021335 waagent[1866]: 2024-08-05T22:11:21.021280Z INFO ExtHandler Aug 5 22:11:21.021414 waagent[1866]: 2024-08-05T22:11:21.021361Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 5 22:11:21.024875 waagent[1866]: 2024-08-05T22:11:21.024834Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 5 22:11:21.101507 waagent[1866]: 2024-08-05T22:11:21.101429Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AC68027818EA265ECA6C6C2517CE3C40DF0E16A6', 'hasPrivateKey': True} Aug 5 22:11:21.101926 waagent[1866]: 2024-08-05T22:11:21.101871Z INFO ExtHandler Downloaded certificate {'thumbprint': '84FF8807E1857A3A194D137087256C3CDE26D7CA', 'hasPrivateKey': False} Aug 5 22:11:21.102347 waagent[1866]: 2024-08-05T22:11:21.102298Z INFO ExtHandler Fetch goal state completed Aug 5 22:11:21.117646 waagent[1866]: 2024-08-05T22:11:21.117550Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1866 Aug 5 22:11:21.117759 waagent[1866]: 2024-08-05T22:11:21.117716Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 5 22:11:21.119316 waagent[1866]: 2024-08-05T22:11:21.119262Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3975.2.0', '', 'Flatcar Container Linux by Kinvolk'] Aug 5 22:11:21.119680 waagent[1866]: 2024-08-05T22:11:21.119632Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 5 22:11:21.134240 waagent[1866]: 2024-08-05T22:11:21.134203Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 5 22:11:21.134402 waagent[1866]: 2024-08-05T22:11:21.134360Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Aug 5 22:11:21.141019 waagent[1866]: 2024-08-05T22:11:21.140903Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 5 22:11:21.147249 systemd[1]: Reloading requested from client PID 1881 ('systemctl') (unit waagent.service)... Aug 5 22:11:21.147265 systemd[1]: Reloading... Aug 5 22:11:21.237847 zram_generator::config[1915]: No configuration found. Aug 5 22:11:21.352252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:11:21.435861 systemd[1]: Reloading finished in 288 ms. Aug 5 22:11:21.464805 waagent[1866]: 2024-08-05T22:11:21.463018Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 5 22:11:21.470891 systemd[1]: Reloading requested from client PID 1969 ('systemctl') (unit waagent.service)... Aug 5 22:11:21.470906 systemd[1]: Reloading... Aug 5 22:11:21.549857 zram_generator::config[2000]: No configuration found. Aug 5 22:11:21.670341 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:11:21.752616 systemd[1]: Reloading finished in 281 ms. Aug 5 22:11:21.777800 waagent[1866]: 2024-08-05T22:11:21.775236Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 5 22:11:21.777800 waagent[1866]: 2024-08-05T22:11:21.775440Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 5 22:11:22.069505 waagent[1866]: 2024-08-05T22:11:22.069359Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 5 22:11:22.070172 waagent[1866]: 2024-08-05T22:11:22.070108Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 5 22:11:22.070973 waagent[1866]: 2024-08-05T22:11:22.070898Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 5 22:11:22.071086 waagent[1866]: 2024-08-05T22:11:22.071036Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:11:22.071609 waagent[1866]: 2024-08-05T22:11:22.071492Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 5 22:11:22.071609 waagent[1866]: 2024-08-05T22:11:22.071551Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:11:22.071996 waagent[1866]: 2024-08-05T22:11:22.071944Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Aug 5 22:11:22.072287 waagent[1866]: 2024-08-05T22:11:22.072202Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:11:22.072568 waagent[1866]: 2024-08-05T22:11:22.072508Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 5 22:11:22.072924 waagent[1866]: 2024-08-05T22:11:22.072876Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Aug 5 22:11:22.073014 waagent[1866]: 2024-08-05T22:11:22.072935Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:11:22.073058 waagent[1866]: 2024-08-05T22:11:22.073003Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 5 22:11:22.073058 waagent[1866]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 5 22:11:22.073058 waagent[1866]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Aug 5 22:11:22.073058 waagent[1866]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 5 22:11:22.073058 waagent[1866]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:11:22.073058 waagent[1866]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:11:22.073058 waagent[1866]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:11:22.073861 waagent[1866]: 2024-08-05T22:11:22.073804Z INFO EnvHandler ExtHandler Configure routes Aug 5 22:11:22.073947 waagent[1866]: 2024-08-05T22:11:22.073908Z INFO EnvHandler ExtHandler Gateway:None Aug 5 22:11:22.074025 waagent[1866]: 2024-08-05T22:11:22.073990Z INFO EnvHandler ExtHandler Routes:None Aug 5 22:11:22.074415 waagent[1866]: 2024-08-05T22:11:22.074356Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 5 22:11:22.074476 waagent[1866]: 2024-08-05T22:11:22.074407Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 5 22:11:22.074684 waagent[1866]: 2024-08-05T22:11:22.074644Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 5 22:11:22.081084 waagent[1866]: 2024-08-05T22:11:22.081041Z INFO ExtHandler ExtHandler Aug 5 22:11:22.081172 waagent[1866]: 2024-08-05T22:11:22.081132Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: e4b4f407-8183-479b-80a6-1536ad871cfa correlation abae8c6d-d382-423a-aaf5-769f68aa23af created: 2024-08-05T22:10:26.588379Z] Aug 5 22:11:22.081518 waagent[1866]: 2024-08-05T22:11:22.081473Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
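The routing table waagent prints above comes straight from /proc/net/route, so destinations and gateways are little-endian hex. Decoded, the entries match the DHCP lease logged earlier: a default route via 10.200.8.1, the on-link 10.200.8.0/24 route, and host routes to the wireserver 168.63.129.16 and the instance metadata service 169.254.169.254. A small hypothetical helper, just to show the byte order:

    import socket, struct

    def decode(hexaddr: str) -> str:
        # /proc/net/route stores IPv4 addresses as host-byte-order (little-endian) hex
        return socket.inet_ntoa(struct.pack('<I', int(hexaddr, 16)))

    print(decode('0108C80A'))  # 10.200.8.1      (default gateway)
    print(decode('10813FA8'))  # 168.63.129.16   (Azure wireserver)
    print(decode('FEA9FEA9'))  # 169.254.169.254 (instance metadata service)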
Aug 5 22:11:22.082077 waagent[1866]: 2024-08-05T22:11:22.082033Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Aug 5 22:11:22.122527 waagent[1866]: 2024-08-05T22:11:22.122473Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: EEBAFD20-AA07-474F-98E6-15750961179F;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 5 22:11:22.124865 waagent[1866]: 2024-08-05T22:11:22.124773Z INFO MonitorHandler ExtHandler Network interfaces: Aug 5 22:11:22.124865 waagent[1866]: Executing ['ip', '-a', '-o', 'link']: Aug 5 22:11:22.124865 waagent[1866]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 5 22:11:22.124865 waagent[1866]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1c:e7 brd ff:ff:ff:ff:ff:ff Aug 5 22:11:22.124865 waagent[1866]: 3: enP52454s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:1c:e7 brd ff:ff:ff:ff:ff:ff\ altname enP52454p0s2 Aug 5 22:11:22.124865 waagent[1866]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 5 22:11:22.124865 waagent[1866]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 5 22:11:22.124865 waagent[1866]: 2: eth0 inet 10.200.8.39/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 5 22:11:22.124865 waagent[1866]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 5 22:11:22.124865 waagent[1866]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 5 22:11:22.124865 waagent[1866]: 2: eth0 inet6 fe80::20d:3aff:feb9:1ce7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 22:11:22.124865 waagent[1866]: 3: enP52454s1 inet6 fe80::20d:3aff:feb9:1ce7/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 22:11:22.187410 waagent[1866]: 2024-08-05T22:11:22.187344Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Aug 5 22:11:22.187410 waagent[1866]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.187410 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.187410 waagent[1866]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.187410 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.187410 waagent[1866]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.187410 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.187410 waagent[1866]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 22:11:22.187410 waagent[1866]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 22:11:22.187410 waagent[1866]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 22:11:22.190658 waagent[1866]: 2024-08-05T22:11:22.190599Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 5 22:11:22.190658 waagent[1866]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.190658 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.190658 waagent[1866]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.190658 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.190658 waagent[1866]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:11:22.190658 waagent[1866]: pkts bytes target prot opt in out source destination Aug 5 22:11:22.190658 waagent[1866]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 22:11:22.190658 waagent[1866]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 22:11:22.190658 waagent[1866]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 22:11:22.191078 waagent[1866]: 2024-08-05T22:11:22.190927Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 5 22:11:29.520260 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:11:29.526024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:11:29.623646 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:11:29.628446 (kubelet)[2096]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:11:30.160177 kubelet[2096]: E0805 22:11:30.160115 2096 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:11:30.164390 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:11:30.164599 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:11:40.340736 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:11:40.347016 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:11:40.437065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
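Every kubelet start attempt above and below fails the same way: /var/lib/kubelet/config.yaml does not exist yet, because the node bootstrap step that generates it has not run at this point in the boot. As an illustrative sketch only (the real file is produced later by the cluster bootstrap, not written by hand), a bare KubeletConfiguration is assumed here to be the smallest document the kubelet will load at that path, with every other field falling back to its default:

    from pathlib import Path

    # Assumed-minimal KubeletConfiguration: just apiVersion and kind; everything
    # else is left to the kubelet's built-in defaults.
    MINIMAL_KUBELET_CONFIG = (
        "apiVersion: kubelet.config.k8s.io/v1beta1\n"
        "kind: KubeletConfiguration\n"
    )

    def write_minimal_config(path: str = "/var/lib/kubelet/config.yaml") -> None:
        target = Path(path)
        target.parent.mkdir(parents=True, exist_ok=True)
        target.write_text(MINIMAL_KUBELET_CONFIG)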
Aug 5 22:11:40.441417 (kubelet)[2111]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:11:40.845301 chronyd[1680]: Selected source PHC0 Aug 5 22:11:40.951029 kubelet[2111]: E0805 22:11:40.950967 2111 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:11:40.953569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:11:40.953778 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:11:51.090763 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 22:11:51.096010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:11:51.815851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:11:51.827096 (kubelet)[2127]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:11:52.158948 kubelet[2127]: E0805 22:11:52.158888 2127 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:11:52.161633 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:11:52.161860 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:02.149040 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 5 22:12:02.340727 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 5 22:12:02.350397 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:02.473548 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:02.478093 (kubelet)[2146]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:02.518933 kubelet[2146]: E0805 22:12:02.518883 2146 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:02.521379 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:02.521569 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:02.632121 update_engine[1669]: I0805 22:12:02.632053 1669 update_attempter.cc:509] Updating boot flags... Aug 5 22:12:03.393982 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:12:03.403207 systemd[1]: Started sshd@0-10.200.8.39:22-10.200.16.10:49468.service - OpenSSH per-connection server daemon (10.200.16.10:49468).
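The cadence of the failures above is set by the unit's restart delay: the "Scheduled restart job" entries for counters 1 through 4 land roughly eleven seconds apart. A quick check of those gaps from the timestamps logged above (the underlying RestartSec value itself is not logged and is an assumption):

    from datetime import datetime

    # "Scheduled restart job" timestamps for restart counters 1-4, copied from the journal above.
    stamps = ["22:11:29.520260", "22:11:40.340736", "22:11:51.090763", "22:12:02.340727"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    for earlier, later in zip(times, times[1:]):
        print(f"{(later - earlier).total_seconds():.2f} s between scheduled restarts")
    # ~10.82 s, 10.75 s, 11.25 s: a fixed delay of about ten seconds plus however
    # long each failed start attempt takes before systemd schedules the next one.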
Aug 5 22:12:05.253812 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2168) Aug 5 22:12:05.364858 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2172) Aug 5 22:12:05.502025 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2172) Aug 5 22:12:05.968412 sshd[2157]: Accepted publickey for core from 10.200.16.10 port 49468 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:05.970174 sshd[2157]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:05.974203 systemd-logind[1663]: New session 3 of user core. Aug 5 22:12:05.983933 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:12:06.566321 systemd[1]: Started sshd@1-10.200.8.39:22-10.200.16.10:49470.service - OpenSSH per-connection server daemon (10.200.16.10:49470). Aug 5 22:12:07.214282 sshd[2253]: Accepted publickey for core from 10.200.16.10 port 49470 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:07.215892 sshd[2253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:07.219967 systemd-logind[1663]: New session 4 of user core. Aug 5 22:12:07.227353 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:12:07.678492 sshd[2253]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:07.682894 systemd[1]: sshd@1-10.200.8.39:22-10.200.16.10:49470.service: Deactivated successfully. Aug 5 22:12:07.685018 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:12:07.685702 systemd-logind[1663]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:12:07.686649 systemd-logind[1663]: Removed session 4. Aug 5 22:12:07.791701 systemd[1]: Started sshd@2-10.200.8.39:22-10.200.16.10:49476.service - OpenSSH per-connection server daemon (10.200.16.10:49476). Aug 5 22:12:08.469241 sshd[2260]: Accepted publickey for core from 10.200.16.10 port 49476 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:08.471017 sshd[2260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:08.476637 systemd-logind[1663]: New session 5 of user core. Aug 5 22:12:08.486933 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:12:08.922673 sshd[2260]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:08.926140 systemd[1]: sshd@2-10.200.8.39:22-10.200.16.10:49476.service: Deactivated successfully. Aug 5 22:12:08.928429 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:12:08.929969 systemd-logind[1663]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:12:08.930967 systemd-logind[1663]: Removed session 5. Aug 5 22:12:09.044078 systemd[1]: Started sshd@3-10.200.8.39:22-10.200.16.10:37162.service - OpenSSH per-connection server daemon (10.200.16.10:37162). Aug 5 22:12:09.696420 sshd[2267]: Accepted publickey for core from 10.200.16.10 port 37162 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:09.698171 sshd[2267]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:09.703809 systemd-logind[1663]: New session 6 of user core. Aug 5 22:12:09.713942 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 5 22:12:10.160079 sshd[2267]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:10.163442 systemd[1]: sshd@3-10.200.8.39:22-10.200.16.10:37162.service: Deactivated successfully. Aug 5 22:12:10.165873 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:12:10.167426 systemd-logind[1663]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:12:10.168426 systemd-logind[1663]: Removed session 6. Aug 5 22:12:10.273675 systemd[1]: Started sshd@4-10.200.8.39:22-10.200.16.10:37174.service - OpenSSH per-connection server daemon (10.200.16.10:37174). Aug 5 22:12:10.919760 sshd[2274]: Accepted publickey for core from 10.200.16.10 port 37174 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:10.921557 sshd[2274]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:10.926516 systemd-logind[1663]: New session 7 of user core. Aug 5 22:12:10.934939 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:12:11.530966 sudo[2277]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:12:11.531370 sudo[2277]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:11.560187 sudo[2277]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:11.663109 sshd[2274]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:11.667087 systemd[1]: sshd@4-10.200.8.39:22-10.200.16.10:37174.service: Deactivated successfully. Aug 5 22:12:11.669464 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:12:11.671042 systemd-logind[1663]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:12:11.672082 systemd-logind[1663]: Removed session 7. Aug 5 22:12:11.777913 systemd[1]: Started sshd@5-10.200.8.39:22-10.200.16.10:37186.service - OpenSSH per-connection server daemon (10.200.16.10:37186). Aug 5 22:12:12.433365 sshd[2282]: Accepted publickey for core from 10.200.16.10 port 37186 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:12.435227 sshd[2282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:12.439953 systemd-logind[1663]: New session 8 of user core. Aug 5 22:12:12.447963 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:12:12.590611 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 5 22:12:12.596224 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:12.803747 sudo[2289]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:12:12.804036 sudo[2289]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:12.965990 sudo[2289]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:12.974478 sudo[2288]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:12:12.975189 sudo[2288]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:12.997030 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:12:12.999634 auditctl[2292]: No rules Aug 5 22:12:13.000441 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:12:13.000822 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:13.016125 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Aug 5 22:12:13.050045 augenrules[2313]: No rules Aug 5 22:12:13.052401 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:12:13.054030 sudo[2288]: pam_unix(sudo:session): session closed for user root Aug 5 22:12:13.062978 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:13.068067 (kubelet)[2319]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:13.109496 kubelet[2319]: E0805 22:12:13.109440 2319 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:13.111960 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:13.112177 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:13.157472 sshd[2282]: pam_unix(sshd:session): session closed for user core Aug 5 22:12:13.161011 systemd[1]: sshd@5-10.200.8.39:22-10.200.16.10:37186.service: Deactivated successfully. Aug 5 22:12:13.163178 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:12:13.164655 systemd-logind[1663]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:12:13.165688 systemd-logind[1663]: Removed session 8. Aug 5 22:12:13.270756 systemd[1]: Started sshd@6-10.200.8.39:22-10.200.16.10:37198.service - OpenSSH per-connection server daemon (10.200.16.10:37198). Aug 5 22:12:13.921877 sshd[2331]: Accepted publickey for core from 10.200.16.10 port 37198 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:12:13.923605 sshd[2331]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:12:13.929281 systemd-logind[1663]: New session 9 of user core. Aug 5 22:12:13.938936 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:12:14.282029 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:12:14.282377 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:12:15.056127 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:12:15.056272 (dockerd)[2343]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:12:15.817433 dockerd[2343]: time="2024-08-05T22:12:15.817370193Z" level=info msg="Starting up" Aug 5 22:12:15.854740 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1348978593-merged.mount: Deactivated successfully. Aug 5 22:12:15.937892 dockerd[2343]: time="2024-08-05T22:12:15.937846216Z" level=info msg="Loading containers: start." Aug 5 22:12:16.086827 kernel: Initializing XFRM netlink socket Aug 5 22:12:16.196616 systemd-networkd[1574]: docker0: Link UP Aug 5 22:12:16.367084 dockerd[2343]: time="2024-08-05T22:12:16.367033961Z" level=info msg="Loading containers: done." 
Aug 5 22:12:16.599220 dockerd[2343]: time="2024-08-05T22:12:16.599173016Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:12:16.599447 dockerd[2343]: time="2024-08-05T22:12:16.599417721Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:12:16.599563 dockerd[2343]: time="2024-08-05T22:12:16.599543124Z" level=info msg="Daemon has completed initialization" Aug 5 22:12:16.648420 dockerd[2343]: time="2024-08-05T22:12:16.648284585Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:12:16.648990 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:12:18.223778 containerd[1689]: time="2024-08-05T22:12:18.223731089Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 22:12:18.925218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903336317.mount: Deactivated successfully. Aug 5 22:12:21.272809 containerd[1689]: time="2024-08-05T22:12:21.272739792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:21.275843 containerd[1689]: time="2024-08-05T22:12:21.275763846Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527325" Aug 5 22:12:21.279615 containerd[1689]: time="2024-08-05T22:12:21.279555314Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:21.283559 containerd[1689]: time="2024-08-05T22:12:21.283522886Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:21.284768 containerd[1689]: time="2024-08-05T22:12:21.284541904Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 3.060763915s" Aug 5 22:12:21.284768 containerd[1689]: time="2024-08-05T22:12:21.284584105Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 5 22:12:21.305575 containerd[1689]: time="2024-08-05T22:12:21.305544982Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 22:12:23.340492 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Aug 5 22:12:23.349032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:23.491005 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:12:23.491932 (kubelet)[2541]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:23.988419 kubelet[2541]: E0805 22:12:23.988356 2541 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:23.991131 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:23.991349 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:12:25.047618 containerd[1689]: time="2024-08-05T22:12:25.047547446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:25.054028 containerd[1689]: time="2024-08-05T22:12:25.053958661Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847075" Aug 5 22:12:25.061970 containerd[1689]: time="2024-08-05T22:12:25.061912704Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:25.069429 containerd[1689]: time="2024-08-05T22:12:25.069365438Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:25.070486 containerd[1689]: time="2024-08-05T22:12:25.070444658Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 3.764865176s" Aug 5 22:12:25.070581 containerd[1689]: time="2024-08-05T22:12:25.070491059Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 5 22:12:25.092692 containerd[1689]: time="2024-08-05T22:12:25.092647357Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 22:12:26.587008 containerd[1689]: time="2024-08-05T22:12:26.586946118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:26.590979 containerd[1689]: time="2024-08-05T22:12:26.590829887Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097303" Aug 5 22:12:26.597507 containerd[1689]: time="2024-08-05T22:12:26.597367505Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:26.604117 containerd[1689]: time="2024-08-05T22:12:26.604057325Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Aug 5 22:12:26.605248 containerd[1689]: time="2024-08-05T22:12:26.605076744Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.512386086s" Aug 5 22:12:26.605248 containerd[1689]: time="2024-08-05T22:12:26.605115144Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 5 22:12:26.626589 containerd[1689]: time="2024-08-05T22:12:26.626562730Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 22:12:28.084475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3743834095.mount: Deactivated successfully. Aug 5 22:12:28.538280 containerd[1689]: time="2024-08-05T22:12:28.538222193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:28.540560 containerd[1689]: time="2024-08-05T22:12:28.540425933Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303777" Aug 5 22:12:28.545327 containerd[1689]: time="2024-08-05T22:12:28.545260619Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:28.550465 containerd[1689]: time="2024-08-05T22:12:28.550411412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:28.551641 containerd[1689]: time="2024-08-05T22:12:28.551179526Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 1.924580696s" Aug 5 22:12:28.551641 containerd[1689]: time="2024-08-05T22:12:28.551231827Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\"" Aug 5 22:12:28.571999 containerd[1689]: time="2024-08-05T22:12:28.571961999Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:12:29.247034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2291714128.mount: Deactivated successfully. 
Aug 5 22:12:29.273607 containerd[1689]: time="2024-08-05T22:12:29.273560858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:29.277251 containerd[1689]: time="2024-08-05T22:12:29.277107020Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Aug 5 22:12:29.289291 containerd[1689]: time="2024-08-05T22:12:29.289232133Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:29.295506 containerd[1689]: time="2024-08-05T22:12:29.295440842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:29.296581 containerd[1689]: time="2024-08-05T22:12:29.296409059Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 724.408059ms" Aug 5 22:12:29.296581 containerd[1689]: time="2024-08-05T22:12:29.296454760Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 5 22:12:29.317753 containerd[1689]: time="2024-08-05T22:12:29.317725833Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 22:12:30.012763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount44551456.mount: Deactivated successfully. 
Aug 5 22:12:33.254064 containerd[1689]: time="2024-08-05T22:12:33.254002479Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:33.259121 containerd[1689]: time="2024-08-05T22:12:33.258978466Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Aug 5 22:12:33.262820 containerd[1689]: time="2024-08-05T22:12:33.262749632Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:33.267484 containerd[1689]: time="2024-08-05T22:12:33.267422414Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:33.268886 containerd[1689]: time="2024-08-05T22:12:33.268722137Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 3.950961003s" Aug 5 22:12:33.268886 containerd[1689]: time="2024-08-05T22:12:33.268765837Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 5 22:12:33.289871 containerd[1689]: time="2024-08-05T22:12:33.289844407Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 22:12:33.957545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925759592.mount: Deactivated successfully. Aug 5 22:12:34.090642 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Aug 5 22:12:34.096019 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:34.187208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:34.191496 (kubelet)[2646]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:12:34.820505 kubelet[2646]: E0805 22:12:34.820444 2646 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:12:34.822959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:12:34.823181 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 5 22:12:36.435485 containerd[1689]: time="2024-08-05T22:12:36.435418483Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:36.440038 containerd[1689]: time="2024-08-05T22:12:36.439807260Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Aug 5 22:12:36.445055 containerd[1689]: time="2024-08-05T22:12:36.444993451Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:36.450659 containerd[1689]: time="2024-08-05T22:12:36.450522348Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:12:36.452424 containerd[1689]: time="2024-08-05T22:12:36.451871272Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 3.161938063s" Aug 5 22:12:36.452424 containerd[1689]: time="2024-08-05T22:12:36.451911672Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 5 22:12:39.380762 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:39.387076 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:39.409653 systemd[1]: Reloading requested from client PID 2720 ('systemctl') (unit session-9.scope)... Aug 5 22:12:39.409668 systemd[1]: Reloading... Aug 5 22:12:39.496817 zram_generator::config[2754]: No configuration found. Aug 5 22:12:39.627655 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:12:39.709682 systemd[1]: Reloading finished in 299 ms. Aug 5 22:12:39.776092 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:12:39.776259 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:12:39.776607 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:39.784160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:42.182683 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:42.193500 (kubelet)[2824]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:12:42.255616 kubelet[2824]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:12:42.255616 kubelet[2824]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 5 22:12:42.256033 kubelet[2824]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:12:42.256033 kubelet[2824]: I0805 22:12:42.255720 2824 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:12:42.621438 kubelet[2824]: I0805 22:12:42.621401 2824 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:12:42.621438 kubelet[2824]: I0805 22:12:42.621432 2824 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:12:42.621734 kubelet[2824]: I0805 22:12:42.621711 2824 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:12:43.530337 kubelet[2824]: I0805 22:12:43.530167 2824 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:12:43.531160 kubelet[2824]: E0805 22:12:43.530976 2824 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.539411 kubelet[2824]: I0805 22:12:43.539380 2824 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:12:43.539661 kubelet[2824]: I0805 22:12:43.539639 2824 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:12:43.539874 kubelet[2824]: I0805 22:12:43.539845 2824 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:12:43.540032 kubelet[2824]: I0805 22:12:43.539881 2824 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:12:43.540032 kubelet[2824]: I0805 22:12:43.539896 2824 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 
22:12:43.540637 kubelet[2824]: I0805 22:12:43.540613 2824 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:12:43.541849 kubelet[2824]: I0805 22:12:43.541830 2824 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:12:43.541927 kubelet[2824]: I0805 22:12:43.541855 2824 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:12:43.541927 kubelet[2824]: I0805 22:12:43.541884 2824 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:12:43.541927 kubelet[2824]: I0805 22:12:43.541905 2824 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:12:43.545564 kubelet[2824]: W0805 22:12:43.545158 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.545564 kubelet[2824]: E0805 22:12:43.545211 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.545564 kubelet[2824]: W0805 22:12:43.545505 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.545564 kubelet[2824]: E0805 22:12:43.545545 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.546376 kubelet[2824]: I0805 22:12:43.546081 2824 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:12:43.548770 kubelet[2824]: W0805 22:12:43.548745 2824 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 5 22:12:43.549377 kubelet[2824]: I0805 22:12:43.549285 2824 server.go:1232] "Started kubelet" Aug 5 22:12:43.556795 kubelet[2824]: I0805 22:12:43.554937 2824 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:12:43.556795 kubelet[2824]: I0805 22:12:43.556127 2824 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:12:43.557547 kubelet[2824]: E0805 22:12:43.557532 2824 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:12:43.557644 kubelet[2824]: E0805 22:12:43.557635 2824 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:12:43.557712 kubelet[2824]: I0805 22:12:43.557649 2824 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:12:43.558039 kubelet[2824]: I0805 22:12:43.558017 2824 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:12:43.558460 kubelet[2824]: I0805 22:12:43.558439 2824 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:12:43.559324 kubelet[2824]: E0805 22:12:43.559212 2824 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.0-a-9e76a2f9cc.17e8f4c0244ed821", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.0-a-9e76a2f9cc", UID:"ci-3975.2.0-a-9e76a2f9cc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.0-a-9e76a2f9cc"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 12, 43, 549259809, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 12, 43, 549259809, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.0-a-9e76a2f9cc"}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Aug 5 22:12:43.564998 kubelet[2824]: I0805 22:12:43.564977 2824 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:12:43.566671 kubelet[2824]: E0805 22:12:43.566652 2824 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-a-9e76a2f9cc?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="200ms" Aug 5 22:12:43.567846 kubelet[2824]: I0805 22:12:43.567832 2824 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:12:43.568498 kubelet[2824]: W0805 22:12:43.568455 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.568994 kubelet[2824]: E0805 22:12:43.568970 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:43.569423 kubelet[2824]: I0805 22:12:43.569399 2824 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:12:43.626147 kubelet[2824]: I0805 22:12:43.626117 2824 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 
22:12:43.626147 kubelet[2824]: I0805 22:12:43.626151 2824 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:12:43.626317 kubelet[2824]: I0805 22:12:43.626174 2824 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:12:43.667033 kubelet[2824]: I0805 22:12:43.666959 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:43.667365 kubelet[2824]: E0805 22:12:43.667332 2824 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:43.768275 kubelet[2824]: E0805 22:12:43.768238 2824 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-a-9e76a2f9cc?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="400ms" Aug 5 22:12:43.869854 kubelet[2824]: I0805 22:12:43.869820 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:43.870248 kubelet[2824]: E0805 22:12:43.870223 2824 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:44.169721 kubelet[2824]: E0805 22:12:44.169568 2824 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-a-9e76a2f9cc?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="800ms" Aug 5 22:12:44.272502 kubelet[2824]: I0805 22:12:44.272462 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:44.272950 kubelet[2824]: E0805 22:12:44.272918 2824 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:44.942368 kubelet[2824]: I0805 22:12:44.941425 2824 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:12:44.943387 kubelet[2824]: I0805 22:12:44.943358 2824 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:12:44.943494 kubelet[2824]: I0805 22:12:44.943397 2824 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:12:44.943494 kubelet[2824]: I0805 22:12:44.943423 2824 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:12:44.943494 kubelet[2824]: E0805 22:12:44.943480 2824 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:12:44.945267 kubelet[2824]: W0805 22:12:44.945238 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:44.945351 kubelet[2824]: E0805 22:12:44.945281 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:44.961335 kubelet[2824]: W0805 22:12:44.961180 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:44.961335 kubelet[2824]: E0805 22:12:44.961336 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:44.970581 kubelet[2824]: E0805 22:12:44.970556 2824 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-a-9e76a2f9cc?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="1.6s" Aug 5 22:12:45.044260 kubelet[2824]: E0805 22:12:45.044210 2824 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:12:45.277887 kubelet[2824]: I0805 22:12:45.075472 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.277887 kubelet[2824]: E0805 22:12:45.075840 2824 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.277887 kubelet[2824]: W0805 22:12:45.078223 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:45.277887 kubelet[2824]: E0805 22:12:45.078250 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:45.277887 kubelet[2824]: W0805 22:12:45.139972 2824 reflector.go:535] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:45.277887 kubelet[2824]: E0805 22:12:45.140038 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:45.277887 kubelet[2824]: E0805 22:12:45.244658 2824 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:12:45.279383 kubelet[2824]: I0805 22:12:45.279258 2824 policy_none.go:49] "None policy: Start" Aug 5 22:12:45.280415 kubelet[2824]: I0805 22:12:45.280387 2824 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:12:45.280558 kubelet[2824]: I0805 22:12:45.280425 2824 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:12:45.292983 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:12:45.303333 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 5 22:12:45.306908 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 5 22:12:45.318107 kubelet[2824]: I0805 22:12:45.317681 2824 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:12:45.318107 kubelet[2824]: I0805 22:12:45.317979 2824 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:12:45.318970 kubelet[2824]: E0805 22:12:45.318818 2824 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:45.429502 kubelet[2824]: E0805 22:12:45.429342 2824 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3975.2.0-a-9e76a2f9cc.17e8f4c0244ed821", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3975.2.0-a-9e76a2f9cc", UID:"ci-3975.2.0-a-9e76a2f9cc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3975.2.0-a-9e76a2f9cc"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 12, 43, 549259809, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 12, 43, 549259809, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3975.2.0-a-9e76a2f9cc"}': 'Post "https://10.200.8.39:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.39:6443: connect: connection refused'(may retry after sleeping) Aug 5 22:12:45.645340 kubelet[2824]: I0805 22:12:45.645242 2824 topology_manager.go:215] "Topology Admit Handler" 
podUID="861694fe53cbf136cdabf581c3206494" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.647588 kubelet[2824]: I0805 22:12:45.647558 2824 topology_manager.go:215] "Topology Admit Handler" podUID="5f3196eaa9c1745da105569b2e076456" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.649457 kubelet[2824]: I0805 22:12:45.649238 2824 topology_manager.go:215] "Topology Admit Handler" podUID="cd5abc6bcfbbb9e48610554fce61bf82" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.656187 systemd[1]: Created slice kubepods-burstable-pod861694fe53cbf136cdabf581c3206494.slice - libcontainer container kubepods-burstable-pod861694fe53cbf136cdabf581c3206494.slice. Aug 5 22:12:45.661672 kubelet[2824]: E0805 22:12:45.661643 2824 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:45.677453 systemd[1]: Created slice kubepods-burstable-pod5f3196eaa9c1745da105569b2e076456.slice - libcontainer container kubepods-burstable-pod5f3196eaa9c1745da105569b2e076456.slice. Aug 5 22:12:45.684233 kubelet[2824]: I0805 22:12:45.683999 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-ca-certs\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684233 kubelet[2824]: I0805 22:12:45.684141 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684233 kubelet[2824]: I0805 22:12:45.684245 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684233 kubelet[2824]: I0805 22:12:45.684342 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684751 kubelet[2824]: I0805 22:12:45.684477 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684751 
kubelet[2824]: I0805 22:12:45.684574 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684751 kubelet[2824]: I0805 22:12:45.684621 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd5abc6bcfbbb9e48610554fce61bf82-kubeconfig\") pod \"kube-scheduler-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"cd5abc6bcfbbb9e48610554fce61bf82\") " pod="kube-system/kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684751 kubelet[2824]: I0805 22:12:45.684660 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-k8s-certs\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.684751 kubelet[2824]: I0805 22:12:45.684689 2824 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-ca-certs\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:45.690390 systemd[1]: Created slice kubepods-burstable-podcd5abc6bcfbbb9e48610554fce61bf82.slice - libcontainer container kubepods-burstable-podcd5abc6bcfbbb9e48610554fce61bf82.slice. 
Aug 5 22:12:45.977317 containerd[1689]: time="2024-08-05T22:12:45.977162968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.0-a-9e76a2f9cc,Uid:861694fe53cbf136cdabf581c3206494,Namespace:kube-system,Attempt:0,}" Aug 5 22:12:45.991970 containerd[1689]: time="2024-08-05T22:12:45.991935441Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc,Uid:5f3196eaa9c1745da105569b2e076456,Namespace:kube-system,Attempt:0,}" Aug 5 22:12:45.993997 containerd[1689]: time="2024-08-05T22:12:45.993961678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.0-a-9e76a2f9cc,Uid:cd5abc6bcfbbb9e48610554fce61bf82,Namespace:kube-system,Attempt:0,}" Aug 5 22:12:46.374105 kubelet[2824]: W0805 22:12:46.374069 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:46.374105 kubelet[2824]: E0805 22:12:46.374113 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:46.571355 kubelet[2824]: E0805 22:12:46.571315 2824 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3975.2.0-a-9e76a2f9cc?timeout=10s\": dial tcp 10.200.8.39:6443: connect: connection refused" interval="3.2s" Aug 5 22:12:46.603969 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3158279550.mount: Deactivated successfully. 
Aug 5 22:12:46.678830 kubelet[2824]: I0805 22:12:46.678681 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:46.679183 kubelet[2824]: E0805 22:12:46.679150 2824 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.39:6443/api/v1/nodes\": dial tcp 10.200.8.39:6443: connect: connection refused" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:46.759108 containerd[1689]: time="2024-08-05T22:12:46.759055691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:12:46.761921 containerd[1689]: time="2024-08-05T22:12:46.761719740Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 5 22:12:46.766097 containerd[1689]: time="2024-08-05T22:12:46.765971018Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:12:46.769938 containerd[1689]: time="2024-08-05T22:12:46.769860490Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:12:46.773628 containerd[1689]: time="2024-08-05T22:12:46.773582858Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:12:46.777038 containerd[1689]: time="2024-08-05T22:12:46.776997521Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:12:46.780012 containerd[1689]: time="2024-08-05T22:12:46.779673371Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:12:46.783667 containerd[1689]: time="2024-08-05T22:12:46.783616844Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:12:46.784956 containerd[1689]: time="2024-08-05T22:12:46.784390758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 792.358116ms" Aug 5 22:12:46.785966 containerd[1689]: time="2024-08-05T22:12:46.785932886Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 808.638515ms" Aug 5 22:12:46.791691 containerd[1689]: time="2024-08-05T22:12:46.791659492Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 797.623413ms" Aug 5 22:12:46.986445 kubelet[2824]: W0805 22:12:46.986337 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:46.986445 kubelet[2824]: E0805 22:12:46.986382 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:47.293321 containerd[1689]: time="2024-08-05T22:12:47.292219125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:12:47.293321 containerd[1689]: time="2024-08-05T22:12:47.292293826Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.293321 containerd[1689]: time="2024-08-05T22:12:47.292332827Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:12:47.293321 containerd[1689]: time="2024-08-05T22:12:47.292351827Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.298409 containerd[1689]: time="2024-08-05T22:12:47.297883929Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:12:47.298409 containerd[1689]: time="2024-08-05T22:12:47.297956231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.298409 containerd[1689]: time="2024-08-05T22:12:47.297977431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:12:47.298409 containerd[1689]: time="2024-08-05T22:12:47.297991331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.299031 containerd[1689]: time="2024-08-05T22:12:47.298833947Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:12:47.299031 containerd[1689]: time="2024-08-05T22:12:47.298904648Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.299031 containerd[1689]: time="2024-08-05T22:12:47.298938449Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:12:47.299031 containerd[1689]: time="2024-08-05T22:12:47.298958349Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:12:47.339135 systemd[1]: Started cri-containerd-4d32fe4933411fc6741cb19a1bb85c9cf12c3d3ad65abc896fc255e5218f5931.scope - libcontainer container 4d32fe4933411fc6741cb19a1bb85c9cf12c3d3ad65abc896fc255e5218f5931. 
Aug 5 22:12:47.347956 kubelet[2824]: W0805 22:12:47.347917 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:47.348132 kubelet[2824]: E0805 22:12:47.348118 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3975.2.0-a-9e76a2f9cc&limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:47.350964 systemd[1]: Started cri-containerd-195411d64f75ba17f1c0aa1d3b220b2f271fcd9daea661fa2ac6cc2d9de47677.scope - libcontainer container 195411d64f75ba17f1c0aa1d3b220b2f271fcd9daea661fa2ac6cc2d9de47677. Aug 5 22:12:47.353891 systemd[1]: Started cri-containerd-1dd8597bf3cfe4a6a59690d663efae13d4ae25e52f461a693f35ebe89450741c.scope - libcontainer container 1dd8597bf3cfe4a6a59690d663efae13d4ae25e52f461a693f35ebe89450741c. Aug 5 22:12:47.422990 containerd[1689]: time="2024-08-05T22:12:47.422520828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3975.2.0-a-9e76a2f9cc,Uid:861694fe53cbf136cdabf581c3206494,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d32fe4933411fc6741cb19a1bb85c9cf12c3d3ad65abc896fc255e5218f5931\"" Aug 5 22:12:47.431708 containerd[1689]: time="2024-08-05T22:12:47.431661197Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3975.2.0-a-9e76a2f9cc,Uid:cd5abc6bcfbbb9e48610554fce61bf82,Namespace:kube-system,Attempt:0,} returns sandbox id \"195411d64f75ba17f1c0aa1d3b220b2f271fcd9daea661fa2ac6cc2d9de47677\"" Aug 5 22:12:47.432573 containerd[1689]: time="2024-08-05T22:12:47.432535213Z" level=info msg="CreateContainer within sandbox \"4d32fe4933411fc6741cb19a1bb85c9cf12c3d3ad65abc896fc255e5218f5931\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:12:47.435763 containerd[1689]: time="2024-08-05T22:12:47.435735272Z" level=info msg="CreateContainer within sandbox \"195411d64f75ba17f1c0aa1d3b220b2f271fcd9daea661fa2ac6cc2d9de47677\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:12:47.438594 containerd[1689]: time="2024-08-05T22:12:47.438517223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc,Uid:5f3196eaa9c1745da105569b2e076456,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dd8597bf3cfe4a6a59690d663efae13d4ae25e52f461a693f35ebe89450741c\"" Aug 5 22:12:47.440711 containerd[1689]: time="2024-08-05T22:12:47.440678763Z" level=info msg="CreateContainer within sandbox \"1dd8597bf3cfe4a6a59690d663efae13d4ae25e52f461a693f35ebe89450741c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:12:47.536302 containerd[1689]: time="2024-08-05T22:12:47.536251226Z" level=info msg="CreateContainer within sandbox \"195411d64f75ba17f1c0aa1d3b220b2f271fcd9daea661fa2ac6cc2d9de47677\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d17a5e0f0dcf4e5eac6d7ecbf5982c321821292101d5f19cb1fe34eb336ef8f0\"" Aug 5 22:12:47.539651 containerd[1689]: time="2024-08-05T22:12:47.539586788Z" level=info msg="CreateContainer within sandbox \"4d32fe4933411fc6741cb19a1bb85c9cf12c3d3ad65abc896fc255e5218f5931\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns 
container id \"09f0d2c32346bd7b96bb7631ccf5cedda487722271a2dc2d80c8badb36fb5971\"" Aug 5 22:12:47.539975 containerd[1689]: time="2024-08-05T22:12:47.539841692Z" level=info msg="StartContainer for \"d17a5e0f0dcf4e5eac6d7ecbf5982c321821292101d5f19cb1fe34eb336ef8f0\"" Aug 5 22:12:47.544861 containerd[1689]: time="2024-08-05T22:12:47.543744464Z" level=info msg="CreateContainer within sandbox \"1dd8597bf3cfe4a6a59690d663efae13d4ae25e52f461a693f35ebe89450741c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f07730c6484f6bbb9fbfccef451f8c0873f416c987ad152d9710cb3c3fcb72a9\"" Aug 5 22:12:47.544861 containerd[1689]: time="2024-08-05T22:12:47.543926568Z" level=info msg="StartContainer for \"09f0d2c32346bd7b96bb7631ccf5cedda487722271a2dc2d80c8badb36fb5971\"" Aug 5 22:12:47.553867 containerd[1689]: time="2024-08-05T22:12:47.553840050Z" level=info msg="StartContainer for \"f07730c6484f6bbb9fbfccef451f8c0873f416c987ad152d9710cb3c3fcb72a9\"" Aug 5 22:12:47.585995 systemd[1]: Started cri-containerd-09f0d2c32346bd7b96bb7631ccf5cedda487722271a2dc2d80c8badb36fb5971.scope - libcontainer container 09f0d2c32346bd7b96bb7631ccf5cedda487722271a2dc2d80c8badb36fb5971. Aug 5 22:12:47.598062 systemd[1]: Started cri-containerd-d17a5e0f0dcf4e5eac6d7ecbf5982c321821292101d5f19cb1fe34eb336ef8f0.scope - libcontainer container d17a5e0f0dcf4e5eac6d7ecbf5982c321821292101d5f19cb1fe34eb336ef8f0. Aug 5 22:12:47.630943 systemd[1]: Started cri-containerd-f07730c6484f6bbb9fbfccef451f8c0873f416c987ad152d9710cb3c3fcb72a9.scope - libcontainer container f07730c6484f6bbb9fbfccef451f8c0873f416c987ad152d9710cb3c3fcb72a9. Aug 5 22:12:47.707598 containerd[1689]: time="2024-08-05T22:12:47.707171879Z" level=info msg="StartContainer for \"d17a5e0f0dcf4e5eac6d7ecbf5982c321821292101d5f19cb1fe34eb336ef8f0\" returns successfully" Aug 5 22:12:47.707598 containerd[1689]: time="2024-08-05T22:12:47.707279181Z" level=info msg="StartContainer for \"09f0d2c32346bd7b96bb7631ccf5cedda487722271a2dc2d80c8badb36fb5971\" returns successfully" Aug 5 22:12:47.723939 containerd[1689]: time="2024-08-05T22:12:47.723892987Z" level=info msg="StartContainer for \"f07730c6484f6bbb9fbfccef451f8c0873f416c987ad152d9710cb3c3fcb72a9\" returns successfully" Aug 5 22:12:47.789351 kubelet[2824]: W0805 22:12:47.789293 2824 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:47.789351 kubelet[2824]: E0805 22:12:47.789357 2824 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.39:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.39:6443: connect: connection refused Aug 5 22:12:49.882325 kubelet[2824]: I0805 22:12:49.882289 2824 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:49.974594 kubelet[2824]: E0805 22:12:49.974547 2824 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3975.2.0-a-9e76a2f9cc\" not found" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:50.016813 kubelet[2824]: I0805 22:12:50.015648 2824 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:50.034878 kubelet[2824]: E0805 22:12:50.034842 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.135985 kubelet[2824]: E0805 22:12:50.135846 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.236535 kubelet[2824]: E0805 22:12:50.236458 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.337151 kubelet[2824]: E0805 22:12:50.337108 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.437827 kubelet[2824]: E0805 22:12:50.437698 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.538748 kubelet[2824]: E0805 22:12:50.538656 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.639456 kubelet[2824]: E0805 22:12:50.639400 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.740350 kubelet[2824]: E0805 22:12:50.740114 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.840729 kubelet[2824]: E0805 22:12:50.840644 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:50.941156 kubelet[2824]: E0805 22:12:50.941114 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.041853 kubelet[2824]: E0805 22:12:51.041667 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.142850 kubelet[2824]: E0805 22:12:51.142511 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.243730 kubelet[2824]: E0805 22:12:51.243348 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.344260 kubelet[2824]: E0805 22:12:51.344186 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.444872 kubelet[2824]: E0805 22:12:51.444824 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.545605 kubelet[2824]: E0805 22:12:51.545511 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.646413 kubelet[2824]: E0805 22:12:51.646271 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.746852 kubelet[2824]: E0805 22:12:51.746810 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.847397 kubelet[2824]: E0805 22:12:51.847361 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:51.948114 kubelet[2824]: E0805 22:12:51.948011 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node 
\"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.049161 kubelet[2824]: E0805 22:12:52.049115 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.149922 kubelet[2824]: E0805 22:12:52.149840 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.250681 kubelet[2824]: E0805 22:12:52.250543 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.351198 kubelet[2824]: E0805 22:12:52.351150 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.451746 kubelet[2824]: E0805 22:12:52.451689 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:52.552728 kubelet[2824]: E0805 22:12:52.552601 2824 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3975.2.0-a-9e76a2f9cc\" not found" Aug 5 22:12:53.372622 kubelet[2824]: W0805 22:12:53.246221 2824 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:12:53.550112 kubelet[2824]: I0805 22:12:53.550025 2824 apiserver.go:52] "Watching apiserver" Aug 5 22:12:53.568326 kubelet[2824]: I0805 22:12:53.568289 2824 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:12:54.637021 systemd[1]: Reloading requested from client PID 3096 ('systemctl') (unit session-9.scope)... Aug 5 22:12:54.637035 systemd[1]: Reloading... Aug 5 22:12:54.719863 zram_generator::config[3133]: No configuration found. Aug 5 22:12:54.869935 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:12:54.982710 systemd[1]: Reloading finished in 345 ms. Aug 5 22:12:55.026615 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:55.027352 kubelet[2824]: I0805 22:12:55.026892 2824 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:12:55.036177 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:12:55.036463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:55.041096 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:12:58.250271 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:12:58.256620 (kubelet)[3200]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:12:58.302247 kubelet[3200]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:12:58.302247 kubelet[3200]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 5 22:12:58.302247 kubelet[3200]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:12:58.302739 kubelet[3200]: I0805 22:12:58.302320 3200 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:12:58.306761 kubelet[3200]: I0805 22:12:58.306730 3200 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:12:58.306761 kubelet[3200]: I0805 22:12:58.306754 3200 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:12:58.307010 kubelet[3200]: I0805 22:12:58.306989 3200 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:12:58.308379 kubelet[3200]: I0805 22:12:58.308354 3200 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:12:58.309445 kubelet[3200]: I0805 22:12:58.309288 3200 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:12:58.314943 kubelet[3200]: I0805 22:12:58.314889 3200 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:12:58.315122 kubelet[3200]: I0805 22:12:58.315105 3200 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:12:58.315280 kubelet[3200]: I0805 22:12:58.315262 3200 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:12:58.315408 kubelet[3200]: I0805 22:12:58.315288 3200 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:12:58.315408 kubelet[3200]: I0805 22:12:58.315301 3200 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:12:58.315408 kubelet[3200]: I0805 22:12:58.315344 3200 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:12:58.315524 kubelet[3200]: I0805 22:12:58.315454 3200 kubelet.go:393] "Attempting to sync node with API server" 
Aug 5 22:12:58.315524 kubelet[3200]: I0805 22:12:58.315471 3200 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:12:58.315524 kubelet[3200]: I0805 22:12:58.315500 3200 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:12:58.315524 kubelet[3200]: I0805 22:12:58.315517 3200 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:12:58.319926 kubelet[3200]: I0805 22:12:58.319888 3200 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.17" apiVersion="v1" Aug 5 22:12:58.320482 kubelet[3200]: I0805 22:12:58.320461 3200 server.go:1232] "Started kubelet" Aug 5 22:12:58.327391 kubelet[3200]: I0805 22:12:58.327097 3200 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:12:58.328483 kubelet[3200]: I0805 22:12:58.328468 3200 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:12:58.331813 kubelet[3200]: E0805 22:12:58.330275 3200 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:12:58.331813 kubelet[3200]: E0805 22:12:58.330306 3200 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:12:58.331813 kubelet[3200]: I0805 22:12:58.330719 3200 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:12:58.331813 kubelet[3200]: I0805 22:12:58.330918 3200 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:12:58.335677 kubelet[3200]: I0805 22:12:58.334726 3200 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:12:58.345807 kubelet[3200]: I0805 22:12:58.343970 3200 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:12:58.345807 kubelet[3200]: I0805 22:12:58.344104 3200 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:12:58.345807 kubelet[3200]: I0805 22:12:58.344246 3200 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:12:58.348748 kubelet[3200]: I0805 22:12:58.348517 3200 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:12:58.349901 kubelet[3200]: I0805 22:12:58.349757 3200 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:12:58.349901 kubelet[3200]: I0805 22:12:58.349778 3200 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:12:58.349901 kubelet[3200]: I0805 22:12:58.349814 3200 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:12:58.349901 kubelet[3200]: E0805 22:12:58.349876 3200 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:12:58.421815 kubelet[3200]: I0805 22:12:58.421707 3200 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:12:58.421815 kubelet[3200]: I0805 22:12:58.421727 3200 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:12:58.421815 kubelet[3200]: I0805 22:12:58.421743 3200 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:12:58.422053 kubelet[3200]: I0805 22:12:58.421910 3200 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:12:58.422053 kubelet[3200]: I0805 22:12:58.421934 3200 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:12:58.422053 kubelet[3200]: I0805 22:12:58.421944 3200 policy_none.go:49] "None policy: Start" Aug 5 22:12:58.422583 kubelet[3200]: I0805 22:12:58.422562 3200 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:12:58.422583 kubelet[3200]: I0805 22:12:58.422587 3200 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:12:58.422825 kubelet[3200]: I0805 22:12:58.422771 3200 state_mem.go:75] "Updated machine memory state" Aug 5 22:12:58.426632 kubelet[3200]: I0805 22:12:58.426606 3200 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:12:58.427538 kubelet[3200]: I0805 22:12:58.427180 3200 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:12:58.450746 kubelet[3200]: I0805 22:12:58.450698 3200 topology_manager.go:215] "Topology Admit Handler" podUID="861694fe53cbf136cdabf581c3206494" podNamespace="kube-system" podName="kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.451482 kubelet[3200]: I0805 22:12:58.451103 3200 topology_manager.go:215] "Topology Admit Handler" podUID="5f3196eaa9c1745da105569b2e076456" podNamespace="kube-system" podName="kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.451482 kubelet[3200]: I0805 22:12:58.451270 3200 topology_manager.go:215] "Topology Admit Handler" podUID="cd5abc6bcfbbb9e48610554fce61bf82" podNamespace="kube-system" podName="kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.455578 kubelet[3200]: W0805 22:12:58.455464 3200 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:12:58.459533 kubelet[3200]: W0805 22:12:58.459501 3200 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:12:58.462844 kubelet[3200]: W0805 22:12:58.462831 3200 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:12:58.462965 kubelet[3200]: E0805 22:12:58.462933 3200 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.0-a-9e76a2f9cc\" already exists" pod="kube-system/kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.534297 kubelet[3200]: I0805 22:12:58.534165 
3200 kubelet_node_status.go:70] "Attempting to register node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.544745 kubelet[3200]: I0805 22:12:58.544693 3200 kubelet_node_status.go:108] "Node was previously registered" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.544908 kubelet[3200]: I0805 22:12:58.544762 3200 kubelet_node_status.go:73] "Successfully registered node" node="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.645653 kubelet[3200]: I0805 22:12:58.645607 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/cd5abc6bcfbbb9e48610554fce61bf82-kubeconfig\") pod \"kube-scheduler-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"cd5abc6bcfbbb9e48610554fce61bf82\") " pod="kube-system/kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.645893 kubelet[3200]: I0805 22:12:58.645709 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-ca-certs\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.645893 kubelet[3200]: I0805 22:12:58.645770 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.645893 kubelet[3200]: I0805 22:12:58.645849 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-kubeconfig\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.645893 kubelet[3200]: I0805 22:12:58.645893 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.646131 kubelet[3200]: I0805 22:12:58.646003 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/861694fe53cbf136cdabf581c3206494-k8s-certs\") pod \"kube-apiserver-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"861694fe53cbf136cdabf581c3206494\") " pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.646131 kubelet[3200]: I0805 22:12:58.646038 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-ca-certs\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.646230 kubelet[3200]: I0805 22:12:58.646103 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-flexvolume-dir\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:58.646230 kubelet[3200]: I0805 22:12:58.646225 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f3196eaa9c1745da105569b2e076456-k8s-certs\") pod \"kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc\" (UID: \"5f3196eaa9c1745da105569b2e076456\") " pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:59.317576 kubelet[3200]: I0805 22:12:59.317456 3200 apiserver.go:52] "Watching apiserver" Aug 5 22:12:59.345015 kubelet[3200]: I0805 22:12:59.344892 3200 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:12:59.394650 kubelet[3200]: W0805 22:12:59.393970 3200 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:12:59.394650 kubelet[3200]: E0805 22:12:59.394128 3200 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3975.2.0-a-9e76a2f9cc\" already exists" pod="kube-system/kube-scheduler-ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:12:59.401397 kubelet[3200]: I0805 22:12:59.401054 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3975.2.0-a-9e76a2f9cc" podStartSLOduration=1.400979526 podCreationTimestamp="2024-08-05 22:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:12:59.394349505 +0000 UTC m=+1.132784947" watchObservedRunningTime="2024-08-05 22:12:59.400979526 +0000 UTC m=+1.139414968" Aug 5 22:13:03.281581 sudo[2334]: pam_unix(sudo:session): session closed for user root Aug 5 22:13:03.386825 sshd[2331]: pam_unix(sshd:session): session closed for user core Aug 5 22:13:03.390542 systemd[1]: sshd@6-10.200.8.39:22-10.200.16.10:37198.service: Deactivated successfully. Aug 5 22:13:03.393193 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:13:03.393609 systemd[1]: session-9.scope: Consumed 4.670s CPU time, 138.8M memory peak, 0B memory swap peak. Aug 5 22:13:03.395405 systemd-logind[1663]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:13:03.396609 systemd-logind[1663]: Removed session 9. Aug 5 22:13:05.910752 kubelet[3200]: I0805 22:13:05.910680 3200 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:13:05.912668 kubelet[3200]: I0805 22:13:05.911419 3200 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:13:05.912722 containerd[1689]: time="2024-08-05T22:13:05.911222987Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 5 22:13:06.574046 kubelet[3200]: I0805 22:13:06.574000 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3975.2.0-a-9e76a2f9cc" podStartSLOduration=8.573950264 podCreationTimestamp="2024-08-05 22:12:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:12:59.408632166 +0000 UTC m=+1.147067508" watchObservedRunningTime="2024-08-05 22:13:06.573950264 +0000 UTC m=+8.312385706" Aug 5 22:13:06.574267 kubelet[3200]: I0805 22:13:06.574177 3200 topology_manager.go:215] "Topology Admit Handler" podUID="e49747eb-e9e5-4299-a842-1e0e7fb06329" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-8s28r" Aug 5 22:13:06.585026 systemd[1]: Created slice kubepods-besteffort-pode49747eb_e9e5_4299_a842_1e0e7fb06329.slice - libcontainer container kubepods-besteffort-pode49747eb_e9e5_4299_a842_1e0e7fb06329.slice. Aug 5 22:13:06.649000 kubelet[3200]: I0805 22:13:06.648712 3200 topology_manager.go:215] "Topology Admit Handler" podUID="c13ee1e4-d454-43e9-9591-f497ab0bd781" podNamespace="kube-system" podName="kube-proxy-fmswt" Aug 5 22:13:06.659185 systemd[1]: Created slice kubepods-besteffort-podc13ee1e4_d454_43e9_9591_f497ab0bd781.slice - libcontainer container kubepods-besteffort-podc13ee1e4_d454_43e9_9591_f497ab0bd781.slice. Aug 5 22:13:06.699322 kubelet[3200]: I0805 22:13:06.699209 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2swzl\" (UniqueName: \"kubernetes.io/projected/e49747eb-e9e5-4299-a842-1e0e7fb06329-kube-api-access-2swzl\") pod \"tigera-operator-76c4974c85-8s28r\" (UID: \"e49747eb-e9e5-4299-a842-1e0e7fb06329\") " pod="tigera-operator/tigera-operator-76c4974c85-8s28r" Aug 5 22:13:06.699751 kubelet[3200]: I0805 22:13:06.699374 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e49747eb-e9e5-4299-a842-1e0e7fb06329-var-lib-calico\") pod \"tigera-operator-76c4974c85-8s28r\" (UID: \"e49747eb-e9e5-4299-a842-1e0e7fb06329\") " pod="tigera-operator/tigera-operator-76c4974c85-8s28r" Aug 5 22:13:06.799988 kubelet[3200]: I0805 22:13:06.799948 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c13ee1e4-d454-43e9-9591-f497ab0bd781-xtables-lock\") pod \"kube-proxy-fmswt\" (UID: \"c13ee1e4-d454-43e9-9591-f497ab0bd781\") " pod="kube-system/kube-proxy-fmswt" Aug 5 22:13:06.800154 kubelet[3200]: I0805 22:13:06.800020 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c13ee1e4-d454-43e9-9591-f497ab0bd781-kube-proxy\") pod \"kube-proxy-fmswt\" (UID: \"c13ee1e4-d454-43e9-9591-f497ab0bd781\") " pod="kube-system/kube-proxy-fmswt" Aug 5 22:13:06.800154 kubelet[3200]: I0805 22:13:06.800061 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtfbg\" (UniqueName: \"kubernetes.io/projected/c13ee1e4-d454-43e9-9591-f497ab0bd781-kube-api-access-dtfbg\") pod \"kube-proxy-fmswt\" (UID: \"c13ee1e4-d454-43e9-9591-f497ab0bd781\") " pod="kube-system/kube-proxy-fmswt" Aug 5 22:13:06.800154 kubelet[3200]: I0805 22:13:06.800106 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c13ee1e4-d454-43e9-9591-f497ab0bd781-lib-modules\") pod \"kube-proxy-fmswt\" (UID: \"c13ee1e4-d454-43e9-9591-f497ab0bd781\") " pod="kube-system/kube-proxy-fmswt" Aug 5 22:13:06.893972 containerd[1689]: time="2024-08-05T22:13:06.893911185Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8s28r,Uid:e49747eb-e9e5-4299-a842-1e0e7fb06329,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:13:06.963491 containerd[1689]: time="2024-08-05T22:13:06.963373914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fmswt,Uid:c13ee1e4-d454-43e9-9591-f497ab0bd781,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:07.125602 containerd[1689]: time="2024-08-05T22:13:07.125502115Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:07.125887 containerd[1689]: time="2024-08-05T22:13:07.125633218Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:07.126348 containerd[1689]: time="2024-08-05T22:13:07.126290330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:07.126348 containerd[1689]: time="2024-08-05T22:13:07.126319331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:07.129264 containerd[1689]: time="2024-08-05T22:13:07.128484372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:07.130139 containerd[1689]: time="2024-08-05T22:13:07.130092703Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:07.131159 containerd[1689]: time="2024-08-05T22:13:07.131125723Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:07.133907 containerd[1689]: time="2024-08-05T22:13:07.132860756Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:07.155084 systemd[1]: Started cri-containerd-6747b947d97e5e5ffcb8a5869ffd394589d4eb5de10dfb365dccadd3ea9fba1d.scope - libcontainer container 6747b947d97e5e5ffcb8a5869ffd394589d4eb5de10dfb365dccadd3ea9fba1d. Aug 5 22:13:07.162020 systemd[1]: Started cri-containerd-7b70257ecb4f8f0c299bcba102b6e2b07f88f6498843294323b9b1e8d9454c98.scope - libcontainer container 7b70257ecb4f8f0c299bcba102b6e2b07f88f6498843294323b9b1e8d9454c98. 
Aug 5 22:13:07.190597 containerd[1689]: time="2024-08-05T22:13:07.190448158Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fmswt,Uid:c13ee1e4-d454-43e9-9591-f497ab0bd781,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b70257ecb4f8f0c299bcba102b6e2b07f88f6498843294323b9b1e8d9454c98\"" Aug 5 22:13:07.194775 containerd[1689]: time="2024-08-05T22:13:07.194618338Z" level=info msg="CreateContainer within sandbox \"7b70257ecb4f8f0c299bcba102b6e2b07f88f6498843294323b9b1e8d9454c98\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:13:07.211828 containerd[1689]: time="2024-08-05T22:13:07.211771466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-8s28r,Uid:e49747eb-e9e5-4299-a842-1e0e7fb06329,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"6747b947d97e5e5ffcb8a5869ffd394589d4eb5de10dfb365dccadd3ea9fba1d\"" Aug 5 22:13:07.213355 containerd[1689]: time="2024-08-05T22:13:07.213323995Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:13:07.261664 containerd[1689]: time="2024-08-05T22:13:07.261628519Z" level=info msg="CreateContainer within sandbox \"7b70257ecb4f8f0c299bcba102b6e2b07f88f6498843294323b9b1e8d9454c98\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7257952aae26f72282b925ac0b72e0358b249f6f0d94c1db358934086de347d9\"" Aug 5 22:13:07.262248 containerd[1689]: time="2024-08-05T22:13:07.262118329Z" level=info msg="StartContainer for \"7257952aae26f72282b925ac0b72e0358b249f6f0d94c1db358934086de347d9\"" Aug 5 22:13:07.289977 systemd[1]: Started cri-containerd-7257952aae26f72282b925ac0b72e0358b249f6f0d94c1db358934086de347d9.scope - libcontainer container 7257952aae26f72282b925ac0b72e0358b249f6f0d94c1db358934086de347d9. Aug 5 22:13:07.328015 containerd[1689]: time="2024-08-05T22:13:07.327904587Z" level=info msg="StartContainer for \"7257952aae26f72282b925ac0b72e0358b249f6f0d94c1db358934086de347d9\" returns successfully" Aug 5 22:13:07.410492 kubelet[3200]: I0805 22:13:07.409805 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-fmswt" podStartSLOduration=1.409752953 podCreationTimestamp="2024-08-05 22:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:07.409241143 +0000 UTC m=+9.147676485" watchObservedRunningTime="2024-08-05 22:13:07.409752953 +0000 UTC m=+9.148188295" Aug 5 22:13:08.970252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762679295.mount: Deactivated successfully. 
Aug 5 22:13:09.540643 containerd[1689]: time="2024-08-05T22:13:09.540584394Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:09.542697 containerd[1689]: time="2024-08-05T22:13:09.542642532Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076088" Aug 5 22:13:09.549070 containerd[1689]: time="2024-08-05T22:13:09.549012949Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:09.554225 containerd[1689]: time="2024-08-05T22:13:09.554174344Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:09.555551 containerd[1689]: time="2024-08-05T22:13:09.554911658Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.341545361s" Aug 5 22:13:09.555551 containerd[1689]: time="2024-08-05T22:13:09.554949658Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 5 22:13:09.556902 containerd[1689]: time="2024-08-05T22:13:09.556865094Z" level=info msg="CreateContainer within sandbox \"6747b947d97e5e5ffcb8a5869ffd394589d4eb5de10dfb365dccadd3ea9fba1d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:13:09.610463 containerd[1689]: time="2024-08-05T22:13:09.610423078Z" level=info msg="CreateContainer within sandbox \"6747b947d97e5e5ffcb8a5869ffd394589d4eb5de10dfb365dccadd3ea9fba1d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"797441719338b5c2cd30d4b74fff74552580063815bd3f473a2a17c4c254a4d5\"" Aug 5 22:13:09.611541 containerd[1689]: time="2024-08-05T22:13:09.610899387Z" level=info msg="StartContainer for \"797441719338b5c2cd30d4b74fff74552580063815bd3f473a2a17c4c254a4d5\"" Aug 5 22:13:09.638938 systemd[1]: Started cri-containerd-797441719338b5c2cd30d4b74fff74552580063815bd3f473a2a17c4c254a4d5.scope - libcontainer container 797441719338b5c2cd30d4b74fff74552580063815bd3f473a2a17c4c254a4d5. 
Aug 5 22:13:09.670160 containerd[1689]: time="2024-08-05T22:13:09.670119775Z" level=info msg="StartContainer for \"797441719338b5c2cd30d4b74fff74552580063815bd3f473a2a17c4c254a4d5\" returns successfully" Aug 5 22:13:12.645817 kubelet[3200]: I0805 22:13:12.644849 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-8s28r" podStartSLOduration=4.302356169 podCreationTimestamp="2024-08-05 22:13:06 +0000 UTC" firstStartedPulling="2024-08-05 22:13:07.212812186 +0000 UTC m=+8.951247628" lastFinishedPulling="2024-08-05 22:13:09.555220763 +0000 UTC m=+11.293656205" observedRunningTime="2024-08-05 22:13:10.417157005 +0000 UTC m=+12.155592347" watchObservedRunningTime="2024-08-05 22:13:12.644764746 +0000 UTC m=+14.383200088" Aug 5 22:13:12.645817 kubelet[3200]: I0805 22:13:12.645246 3200 topology_manager.go:215] "Topology Admit Handler" podUID="599d535a-f399-4736-ab38-72eb84b9be0d" podNamespace="calico-system" podName="calico-typha-697d4fb46d-wbrlf" Aug 5 22:13:12.660311 systemd[1]: Created slice kubepods-besteffort-pod599d535a_f399_4736_ab38_72eb84b9be0d.slice - libcontainer container kubepods-besteffort-pod599d535a_f399_4736_ab38_72eb84b9be0d.slice. Aug 5 22:13:12.776214 kubelet[3200]: I0805 22:13:12.776153 3200 topology_manager.go:215] "Topology Admit Handler" podUID="12d0e721-3bdc-4e55-be11-945b6f8dc3ab" podNamespace="calico-system" podName="calico-node-tm4ql" Aug 5 22:13:12.786343 systemd[1]: Created slice kubepods-besteffort-pod12d0e721_3bdc_4e55_be11_945b6f8dc3ab.slice - libcontainer container kubepods-besteffort-pod12d0e721_3bdc_4e55_be11_945b6f8dc3ab.slice. Aug 5 22:13:12.833217 kubelet[3200]: I0805 22:13:12.833127 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/599d535a-f399-4736-ab38-72eb84b9be0d-typha-certs\") pod \"calico-typha-697d4fb46d-wbrlf\" (UID: \"599d535a-f399-4736-ab38-72eb84b9be0d\") " pod="calico-system/calico-typha-697d4fb46d-wbrlf" Aug 5 22:13:12.833217 kubelet[3200]: I0805 22:13:12.833180 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xb566\" (UniqueName: \"kubernetes.io/projected/599d535a-f399-4736-ab38-72eb84b9be0d-kube-api-access-xb566\") pod \"calico-typha-697d4fb46d-wbrlf\" (UID: \"599d535a-f399-4736-ab38-72eb84b9be0d\") " pod="calico-system/calico-typha-697d4fb46d-wbrlf" Aug 5 22:13:12.833491 kubelet[3200]: I0805 22:13:12.833271 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/599d535a-f399-4736-ab38-72eb84b9be0d-tigera-ca-bundle\") pod \"calico-typha-697d4fb46d-wbrlf\" (UID: \"599d535a-f399-4736-ab38-72eb84b9be0d\") " pod="calico-system/calico-typha-697d4fb46d-wbrlf" Aug 5 22:13:12.917446 kubelet[3200]: I0805 22:13:12.917318 3200 topology_manager.go:215] "Topology Admit Handler" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" podNamespace="calico-system" podName="csi-node-driver-d9ldw" Aug 5 22:13:12.917833 kubelet[3200]: E0805 22:13:12.917670 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:12.933488 kubelet[3200]: I0805 22:13:12.933453 3200 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-node-certs\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.933949 kubelet[3200]: I0805 22:13:12.933741 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-var-lib-calico\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.933949 kubelet[3200]: I0805 22:13:12.933923 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-policysync\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.934875 kubelet[3200]: I0805 22:13:12.934711 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-lib-modules\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.934875 kubelet[3200]: I0805 22:13:12.934774 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-var-run-calico\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.934875 kubelet[3200]: I0805 22:13:12.934829 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-xtables-lock\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.936281 kubelet[3200]: I0805 22:13:12.936079 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-flexvol-driver-host\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.936281 kubelet[3200]: I0805 22:13:12.936182 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-cni-bin-dir\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.936281 kubelet[3200]: I0805 22:13:12.936219 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jn86\" (UniqueName: \"kubernetes.io/projected/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-kube-api-access-8jn86\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.936965 kubelet[3200]: I0805 22:13:12.936851 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-tigera-ca-bundle\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.936965 kubelet[3200]: I0805 22:13:12.936915 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-cni-net-dir\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.937277 kubelet[3200]: I0805 22:13:12.937165 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/12d0e721-3bdc-4e55-be11-945b6f8dc3ab-cni-log-dir\") pod \"calico-node-tm4ql\" (UID: \"12d0e721-3bdc-4e55-be11-945b6f8dc3ab\") " pod="calico-system/calico-node-tm4ql" Aug 5 22:13:12.967804 containerd[1689]: time="2024-08-05T22:13:12.964970331Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-697d4fb46d-wbrlf,Uid:599d535a-f399-4736-ab38-72eb84b9be0d,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:13.020737 containerd[1689]: time="2024-08-05T22:13:13.020579453Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:13.020737 containerd[1689]: time="2024-08-05T22:13:13.020659154Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:13.020954 containerd[1689]: time="2024-08-05T22:13:13.020772457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:13.021661 containerd[1689]: time="2024-08-05T22:13:13.020827958Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:13.040559 kubelet[3200]: I0805 22:13:13.039672 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vbcd\" (UniqueName: \"kubernetes.io/projected/9c02db63-d93a-43b1-92bc-c342f597f8fe-kube-api-access-4vbcd\") pod \"csi-node-driver-d9ldw\" (UID: \"9c02db63-d93a-43b1-92bc-c342f597f8fe\") " pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:13.040559 kubelet[3200]: I0805 22:13:13.039762 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9c02db63-d93a-43b1-92bc-c342f597f8fe-varrun\") pod \"csi-node-driver-d9ldw\" (UID: \"9c02db63-d93a-43b1-92bc-c342f597f8fe\") " pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:13.040559 kubelet[3200]: I0805 22:13:13.039881 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9c02db63-d93a-43b1-92bc-c342f597f8fe-kubelet-dir\") pod \"csi-node-driver-d9ldw\" (UID: \"9c02db63-d93a-43b1-92bc-c342f597f8fe\") " pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:13.040559 kubelet[3200]: I0805 22:13:13.039915 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9c02db63-d93a-43b1-92bc-c342f597f8fe-socket-dir\") pod \"csi-node-driver-d9ldw\" (UID: \"9c02db63-d93a-43b1-92bc-c342f597f8fe\") " pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:13.040559 kubelet[3200]: I0805 22:13:13.040015 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9c02db63-d93a-43b1-92bc-c342f597f8fe-registration-dir\") pod \"csi-node-driver-d9ldw\" (UID: \"9c02db63-d93a-43b1-92bc-c342f597f8fe\") " pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:13.045466 kubelet[3200]: E0805 22:13:13.045442 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.045968 kubelet[3200]: W0805 22:13:13.045726 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.045968 kubelet[3200]: E0805 22:13:13.045767 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.054879 kubelet[3200]: E0805 22:13:13.051424 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.054879 kubelet[3200]: W0805 22:13:13.051441 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.054879 kubelet[3200]: E0805 22:13:13.051466 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.054879 kubelet[3200]: E0805 22:13:13.054319 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.054879 kubelet[3200]: W0805 22:13:13.054330 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.054879 kubelet[3200]: E0805 22:13:13.054357 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.056418 kubelet[3200]: E0805 22:13:13.056223 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.056418 kubelet[3200]: W0805 22:13:13.056238 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.056550 kubelet[3200]: E0805 22:13:13.056445 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.058236 kubelet[3200]: E0805 22:13:13.056958 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.058236 kubelet[3200]: W0805 22:13:13.056974 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.058236 kubelet[3200]: E0805 22:13:13.057086 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.058236 kubelet[3200]: E0805 22:13:13.058177 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.058236 kubelet[3200]: W0805 22:13:13.058188 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.058236 kubelet[3200]: E0805 22:13:13.058209 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.069970 systemd[1]: Started cri-containerd-726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5.scope - libcontainer container 726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5. 
Aug 5 22:13:13.075127 kubelet[3200]: E0805 22:13:13.075094 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.075127 kubelet[3200]: W0805 22:13:13.075112 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.075259 kubelet[3200]: E0805 22:13:13.075194 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.093647 containerd[1689]: time="2024-08-05T22:13:13.093518794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tm4ql,Uid:12d0e721-3bdc-4e55-be11-945b6f8dc3ab,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:13.133106 containerd[1689]: time="2024-08-05T22:13:13.132765015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-697d4fb46d-wbrlf,Uid:599d535a-f399-4736-ab38-72eb84b9be0d,Namespace:calico-system,Attempt:0,} returns sandbox id \"726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5\"" Aug 5 22:13:13.134836 containerd[1689]: time="2024-08-05T22:13:13.134497947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:13:13.142356 kubelet[3200]: E0805 22:13:13.142283 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.142356 kubelet[3200]: W0805 22:13:13.142305 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.142356 kubelet[3200]: E0805 22:13:13.142332 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.143639 kubelet[3200]: E0805 22:13:13.143426 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.143639 kubelet[3200]: W0805 22:13:13.143443 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.143639 kubelet[3200]: E0805 22:13:13.143480 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.144093 kubelet[3200]: E0805 22:13:13.143772 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.144093 kubelet[3200]: W0805 22:13:13.143807 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.144093 kubelet[3200]: E0805 22:13:13.143834 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.144688 kubelet[3200]: E0805 22:13:13.144291 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.144688 kubelet[3200]: W0805 22:13:13.144308 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.144688 kubelet[3200]: E0805 22:13:13.144335 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.144883 kubelet[3200]: E0805 22:13:13.144719 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.144883 kubelet[3200]: W0805 22:13:13.144732 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.144883 kubelet[3200]: E0805 22:13:13.144755 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.145762 kubelet[3200]: E0805 22:13:13.145343 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.145762 kubelet[3200]: W0805 22:13:13.145359 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.147760 kubelet[3200]: E0805 22:13:13.146401 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.147760 kubelet[3200]: E0805 22:13:13.146521 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.147760 kubelet[3200]: W0805 22:13:13.146533 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.147760 kubelet[3200]: E0805 22:13:13.146737 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.147760 kubelet[3200]: W0805 22:13:13.146747 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.148267 kubelet[3200]: E0805 22:13:13.148140 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.148267 kubelet[3200]: E0805 22:13:13.148174 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.148407 kubelet[3200]: E0805 22:13:13.148305 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.148407 kubelet[3200]: W0805 22:13:13.148315 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.148728 kubelet[3200]: E0805 22:13:13.148618 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.149256 kubelet[3200]: E0805 22:13:13.149234 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.149256 kubelet[3200]: W0805 22:13:13.149255 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.149637 kubelet[3200]: E0805 22:13:13.149521 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.149637 kubelet[3200]: W0805 22:13:13.149535 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.149637 kubelet[3200]: E0805 22:13:13.149576 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.149637 kubelet[3200]: E0805 22:13:13.149599 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.150992 kubelet[3200]: E0805 22:13:13.150831 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.150992 kubelet[3200]: W0805 22:13:13.150846 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.151142 kubelet[3200]: E0805 22:13:13.151119 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.151340 kubelet[3200]: E0805 22:13:13.151326 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.151340 kubelet[3200]: W0805 22:13:13.151340 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.151497 kubelet[3200]: E0805 22:13:13.151476 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.151558 kubelet[3200]: E0805 22:13:13.151534 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.151558 kubelet[3200]: W0805 22:13:13.151543 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.151673 kubelet[3200]: E0805 22:13:13.151662 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.152139 kubelet[3200]: E0805 22:13:13.152050 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.152139 kubelet[3200]: W0805 22:13:13.152073 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.152139 kubelet[3200]: E0805 22:13:13.152109 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.153381 kubelet[3200]: E0805 22:13:13.152965 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.153381 kubelet[3200]: W0805 22:13:13.153052 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.153381 kubelet[3200]: E0805 22:13:13.153344 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.153381 kubelet[3200]: W0805 22:13:13.153356 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.153774 kubelet[3200]: E0805 22:13:13.153412 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.153774 kubelet[3200]: E0805 22:13:13.153445 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.153774 kubelet[3200]: E0805 22:13:13.153619 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.153774 kubelet[3200]: W0805 22:13:13.153630 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.154097 kubelet[3200]: E0805 22:13:13.153975 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.154613 kubelet[3200]: E0805 22:13:13.154447 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.154613 kubelet[3200]: W0805 22:13:13.154461 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.154613 kubelet[3200]: E0805 22:13:13.154491 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.154993 kubelet[3200]: E0805 22:13:13.154672 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.154993 kubelet[3200]: W0805 22:13:13.154682 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.154993 kubelet[3200]: E0805 22:13:13.154709 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.154993 kubelet[3200]: E0805 22:13:13.154906 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.154993 kubelet[3200]: W0805 22:13:13.154940 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.155201 kubelet[3200]: E0805 22:13:13.155174 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.155201 kubelet[3200]: W0805 22:13:13.155184 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.155201 kubelet[3200]: E0805 22:13:13.155200 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.156444 kubelet[3200]: E0805 22:13:13.155446 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.156444 kubelet[3200]: W0805 22:13:13.155458 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.156444 kubelet[3200]: E0805 22:13:13.155554 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.156444 kubelet[3200]: E0805 22:13:13.155754 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:13.156444 kubelet[3200]: E0805 22:13:13.155859 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.156444 kubelet[3200]: W0805 22:13:13.155869 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.156444 kubelet[3200]: E0805 22:13:13.155883 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.157232 kubelet[3200]: E0805 22:13:13.156985 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.157232 kubelet[3200]: W0805 22:13:13.157021 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.157232 kubelet[3200]: E0805 22:13:13.157041 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.170971 kubelet[3200]: E0805 22:13:13.170846 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:13.170971 kubelet[3200]: W0805 22:13:13.170861 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:13.170971 kubelet[3200]: E0805 22:13:13.170889 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:13.186957 containerd[1689]: time="2024-08-05T22:13:13.186490002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:13.187188 containerd[1689]: time="2024-08-05T22:13:13.186916310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:13.187188 containerd[1689]: time="2024-08-05T22:13:13.186947011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:13.187188 containerd[1689]: time="2024-08-05T22:13:13.186968311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:13.212190 systemd[1]: Started cri-containerd-1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3.scope - libcontainer container 1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3. 
Aug 5 22:13:13.245269 containerd[1689]: time="2024-08-05T22:13:13.245029378Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tm4ql,Uid:12d0e721-3bdc-4e55-be11-945b6f8dc3ab,Namespace:calico-system,Attempt:0,} returns sandbox id \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\"" Aug 5 22:13:13.957731 systemd[1]: run-containerd-runc-k8s.io-726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5-runc.AqcLCM.mount: Deactivated successfully. Aug 5 22:13:14.350990 kubelet[3200]: E0805 22:13:14.350345 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:15.696935 containerd[1689]: time="2024-08-05T22:13:15.696883241Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:15.713077 containerd[1689]: time="2024-08-05T22:13:15.713010537Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:13:15.819346 containerd[1689]: time="2024-08-05T22:13:15.819263590Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:15.824375 containerd[1689]: time="2024-08-05T22:13:15.824306083Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:15.825112 containerd[1689]: time="2024-08-05T22:13:15.824951994Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.690408347s" Aug 5 22:13:15.825112 containerd[1689]: time="2024-08-05T22:13:15.824991095Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:13:15.826437 containerd[1689]: time="2024-08-05T22:13:15.825980913Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:13:15.844848 containerd[1689]: time="2024-08-05T22:13:15.844818460Z" level=info msg="CreateContainer within sandbox \"726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:13:15.891102 containerd[1689]: time="2024-08-05T22:13:15.890942707Z" level=info msg="CreateContainer within sandbox \"726f2b645c8b9ff8d1566e1b788a766ba72db37c790918edbffe2e6f99dc30f5\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"4ea70c4d5d68f9c35b69bd81b3116bb020d89162f7e6021c7d79ba48c0cbcf52\"" Aug 5 22:13:15.891879 containerd[1689]: time="2024-08-05T22:13:15.891830024Z" level=info msg="StartContainer for \"4ea70c4d5d68f9c35b69bd81b3116bb020d89162f7e6021c7d79ba48c0cbcf52\"" Aug 5 22:13:15.925955 systemd[1]: Started 
cri-containerd-4ea70c4d5d68f9c35b69bd81b3116bb020d89162f7e6021c7d79ba48c0cbcf52.scope - libcontainer container 4ea70c4d5d68f9c35b69bd81b3116bb020d89162f7e6021c7d79ba48c0cbcf52. Aug 5 22:13:15.975649 containerd[1689]: time="2024-08-05T22:13:15.975453260Z" level=info msg="StartContainer for \"4ea70c4d5d68f9c35b69bd81b3116bb020d89162f7e6021c7d79ba48c0cbcf52\" returns successfully" Aug 5 22:13:16.350821 kubelet[3200]: E0805 22:13:16.350397 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:16.439689 kubelet[3200]: I0805 22:13:16.439089 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-697d4fb46d-wbrlf" podStartSLOduration=1.747735818 podCreationTimestamp="2024-08-05 22:13:12 +0000 UTC" firstStartedPulling="2024-08-05 22:13:13.134225842 +0000 UTC m=+14.872661184" lastFinishedPulling="2024-08-05 22:13:15.825521905 +0000 UTC m=+17.563957247" observedRunningTime="2024-08-05 22:13:16.436962843 +0000 UTC m=+18.175398185" watchObservedRunningTime="2024-08-05 22:13:16.439031881 +0000 UTC m=+18.177467323" Aug 5 22:13:16.472741 kubelet[3200]: E0805 22:13:16.472718 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.473817 kubelet[3200]: W0805 22:13:16.473424 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.473817 kubelet[3200]: E0805 22:13:16.473465 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.474225 kubelet[3200]: E0805 22:13:16.474210 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.474326 kubelet[3200]: W0805 22:13:16.474314 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.474406 kubelet[3200]: E0805 22:13:16.474397 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.475504 kubelet[3200]: E0805 22:13:16.475474 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.475676 kubelet[3200]: W0805 22:13:16.475577 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.475676 kubelet[3200]: E0805 22:13:16.475597 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.476128 kubelet[3200]: E0805 22:13:16.475938 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.476128 kubelet[3200]: W0805 22:13:16.475956 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.476128 kubelet[3200]: E0805 22:13:16.475978 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.476507 kubelet[3200]: E0805 22:13:16.476354 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.476507 kubelet[3200]: W0805 22:13:16.476383 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.476507 kubelet[3200]: E0805 22:13:16.476400 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.476858 kubelet[3200]: E0805 22:13:16.476826 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.477028 kubelet[3200]: W0805 22:13:16.476841 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.477028 kubelet[3200]: E0805 22:13:16.476957 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.477808 kubelet[3200]: E0805 22:13:16.477402 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.477808 kubelet[3200]: W0805 22:13:16.477416 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.477808 kubelet[3200]: E0805 22:13:16.477434 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.478351 kubelet[3200]: E0805 22:13:16.478185 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.478351 kubelet[3200]: W0805 22:13:16.478198 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.478351 kubelet[3200]: E0805 22:13:16.478228 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.478853 kubelet[3200]: E0805 22:13:16.478583 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.478853 kubelet[3200]: W0805 22:13:16.478598 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.478853 kubelet[3200]: E0805 22:13:16.478613 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.480138 kubelet[3200]: E0805 22:13:16.480125 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.480138 kubelet[3200]: W0805 22:13:16.480163 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.480138 kubelet[3200]: E0805 22:13:16.480180 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.480645 kubelet[3200]: E0805 22:13:16.480557 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.480645 kubelet[3200]: W0805 22:13:16.480570 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.480645 kubelet[3200]: E0805 22:13:16.480586 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.481102 kubelet[3200]: E0805 22:13:16.480968 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.481102 kubelet[3200]: W0805 22:13:16.480980 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.481102 kubelet[3200]: E0805 22:13:16.481003 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.482123 kubelet[3200]: E0805 22:13:16.481988 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.482123 kubelet[3200]: W0805 22:13:16.482002 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.482123 kubelet[3200]: E0805 22:13:16.482019 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.483473 kubelet[3200]: E0805 22:13:16.482318 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.483473 kubelet[3200]: W0805 22:13:16.482328 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.483473 kubelet[3200]: E0805 22:13:16.482345 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.483473 kubelet[3200]: E0805 22:13:16.482605 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.483473 kubelet[3200]: W0805 22:13:16.482615 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.483473 kubelet[3200]: E0805 22:13:16.482638 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.573775 kubelet[3200]: E0805 22:13:16.573734 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.573775 kubelet[3200]: W0805 22:13:16.573768 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.574247 kubelet[3200]: E0805 22:13:16.573818 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.574247 kubelet[3200]: E0805 22:13:16.574161 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.574247 kubelet[3200]: W0805 22:13:16.574179 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.574247 kubelet[3200]: E0805 22:13:16.574210 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.574610 kubelet[3200]: E0805 22:13:16.574587 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.574610 kubelet[3200]: W0805 22:13:16.574605 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.574754 kubelet[3200]: E0805 22:13:16.574641 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.575027 kubelet[3200]: E0805 22:13:16.575006 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.575027 kubelet[3200]: W0805 22:13:16.575023 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.575198 kubelet[3200]: E0805 22:13:16.575089 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.575383 kubelet[3200]: E0805 22:13:16.575364 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.575383 kubelet[3200]: W0805 22:13:16.575379 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.575621 kubelet[3200]: E0805 22:13:16.575407 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.575679 kubelet[3200]: E0805 22:13:16.575666 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.575735 kubelet[3200]: W0805 22:13:16.575679 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.575735 kubelet[3200]: E0805 22:13:16.575717 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.576075 kubelet[3200]: E0805 22:13:16.576047 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.576075 kubelet[3200]: W0805 22:13:16.576071 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.576274 kubelet[3200]: E0805 22:13:16.576171 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.576659 kubelet[3200]: E0805 22:13:16.576631 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.576756 kubelet[3200]: W0805 22:13:16.576669 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.576959 kubelet[3200]: E0805 22:13:16.576831 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.577030 kubelet[3200]: E0805 22:13:16.576961 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.577030 kubelet[3200]: W0805 22:13:16.576973 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.577146 kubelet[3200]: E0805 22:13:16.577105 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.577358 kubelet[3200]: E0805 22:13:16.577325 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.577358 kubelet[3200]: W0805 22:13:16.577341 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.577517 kubelet[3200]: E0805 22:13:16.577368 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.577642 kubelet[3200]: E0805 22:13:16.577619 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.577642 kubelet[3200]: W0805 22:13:16.577635 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.577805 kubelet[3200]: E0805 22:13:16.577671 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.577963 kubelet[3200]: E0805 22:13:16.577945 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.577963 kubelet[3200]: W0805 22:13:16.577960 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.578089 kubelet[3200]: E0805 22:13:16.577996 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.578307 kubelet[3200]: E0805 22:13:16.578287 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.578307 kubelet[3200]: W0805 22:13:16.578303 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.578447 kubelet[3200]: E0805 22:13:16.578329 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:16.578749 kubelet[3200]: E0805 22:13:16.578729 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.578749 kubelet[3200]: W0805 22:13:16.578744 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.579222 kubelet[3200]: E0805 22:13:16.578866 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.579222 kubelet[3200]: E0805 22:13:16.579047 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.579222 kubelet[3200]: W0805 22:13:16.579057 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.579222 kubelet[3200]: E0805 22:13:16.579072 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.579444 kubelet[3200]: E0805 22:13:16.579335 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.579444 kubelet[3200]: W0805 22:13:16.579347 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.579444 kubelet[3200]: E0805 22:13:16.579367 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.579680 kubelet[3200]: E0805 22:13:16.579662 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.579759 kubelet[3200]: W0805 22:13:16.579725 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.579759 kubelet[3200]: E0805 22:13:16.579748 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:16.580373 kubelet[3200]: E0805 22:13:16.580352 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:16.580373 kubelet[3200]: W0805 22:13:16.580368 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:16.580498 kubelet[3200]: E0805 22:13:16.580388 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.487938 containerd[1689]: time="2024-08-05T22:13:17.487881336Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.490219 containerd[1689]: time="2024-08-05T22:13:17.490140979Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:13:17.491887 kubelet[3200]: E0805 22:13:17.491863 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.492431 kubelet[3200]: W0805 22:13:17.492280 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.492431 kubelet[3200]: E0805 22:13:17.492318 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.492678 kubelet[3200]: E0805 22:13:17.492602 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.492678 kubelet[3200]: W0805 22:13:17.492616 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.492678 kubelet[3200]: E0805 22:13:17.492635 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.493187 kubelet[3200]: E0805 22:13:17.493095 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.493187 kubelet[3200]: W0805 22:13:17.493110 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.493187 kubelet[3200]: E0805 22:13:17.493127 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.493352 kubelet[3200]: E0805 22:13:17.493343 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.493394 kubelet[3200]: W0805 22:13:17.493353 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.493394 kubelet[3200]: E0805 22:13:17.493368 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.493879 kubelet[3200]: E0805 22:13:17.493574 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.493879 kubelet[3200]: W0805 22:13:17.493588 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.493879 kubelet[3200]: E0805 22:13:17.493604 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.493879 kubelet[3200]: E0805 22:13:17.493816 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.493879 kubelet[3200]: W0805 22:13:17.493829 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.493879 kubelet[3200]: E0805 22:13:17.493845 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.494165 kubelet[3200]: E0805 22:13:17.494039 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.494165 kubelet[3200]: W0805 22:13:17.494048 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.494165 kubelet[3200]: E0805 22:13:17.494067 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494342 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496199 kubelet[3200]: W0805 22:13:17.494354 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494373 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494586 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496199 kubelet[3200]: W0805 22:13:17.494597 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494613 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494835 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496199 kubelet[3200]: W0805 22:13:17.494846 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.494861 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.496199 kubelet[3200]: E0805 22:13:17.495100 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496582 containerd[1689]: time="2024-08-05T22:13:17.495594481Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.496640 kubelet[3200]: W0805 22:13:17.495113 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495131 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495316 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496640 kubelet[3200]: W0805 22:13:17.495326 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495342 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495526 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496640 kubelet[3200]: W0805 22:13:17.495538 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495553 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.496640 kubelet[3200]: E0805 22:13:17.495746 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.496640 kubelet[3200]: W0805 22:13:17.495756 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.497094 kubelet[3200]: E0805 22:13:17.495774 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.497094 kubelet[3200]: E0805 22:13:17.495995 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.497094 kubelet[3200]: W0805 22:13:17.496007 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.497094 kubelet[3200]: E0805 22:13:17.496023 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.502267 containerd[1689]: time="2024-08-05T22:13:17.502212805Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:17.502994 containerd[1689]: time="2024-08-05T22:13:17.502857617Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.676831403s" Aug 5 22:13:17.502994 containerd[1689]: time="2024-08-05T22:13:17.502901218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:13:17.505109 containerd[1689]: time="2024-08-05T22:13:17.504903455Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:13:17.545182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount296797589.mount: Deactivated successfully. 
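The recurring kubelet errors above are FlexVolume plugin probing: the kubelet executes /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument init, the executable is not present yet (the flexvol-driver container created just above is what eventually installs it), so the call produces empty output and the JSON unmarshal fails with "unexpected end of JSON input". Below is a minimal sketch of what a FlexVolume driver is expected to print for init, written as a hypothetical Python stand-in for the missing uds binary; it is illustrative only and not the real Calico driver.

    #!/usr/bin/env python3
    # Hypothetical stand-in for a FlexVolume driver binary; not the real Calico driver.
    # The kubelet invokes the driver as "<driver> init" and parses its stdout as JSON,
    # which is why an absent or silent executable yields "unexpected end of JSON input".
    import json
    import sys

    def main() -> int:
        op = sys.argv[1] if len(sys.argv) > 1 else ""
        if op == "init":
            # A successful init reports a status and, optionally, driver capabilities.
            print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
            return 0
        # Operations the driver does not implement are reported as "Not supported".
        print(json.dumps({"status": "Not supported",
                          "message": f"operation {op!r} not implemented"}))
        return 0

    if __name__ == "__main__":
        sys.exit(main())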
Aug 5 22:13:17.550828 containerd[1689]: time="2024-08-05T22:13:17.550775615Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb\"" Aug 5 22:13:17.552491 containerd[1689]: time="2024-08-05T22:13:17.551386426Z" level=info msg="StartContainer for \"9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb\"" Aug 5 22:13:17.581039 kubelet[3200]: E0805 22:13:17.581017 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.581212 kubelet[3200]: W0805 22:13:17.581181 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.581341 kubelet[3200]: E0805 22:13:17.581330 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.581916 kubelet[3200]: E0805 22:13:17.581880 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.582636 kubelet[3200]: W0805 22:13:17.582567 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.582814 kubelet[3200]: E0805 22:13:17.582617 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.584681 kubelet[3200]: E0805 22:13:17.584537 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.584681 kubelet[3200]: W0805 22:13:17.584553 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.584874 kubelet[3200]: E0805 22:13:17.584808 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.585942 kubelet[3200]: E0805 22:13:17.585688 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.585942 kubelet[3200]: W0805 22:13:17.585705 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.585942 kubelet[3200]: E0805 22:13:17.585725 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.586338 kubelet[3200]: E0805 22:13:17.586030 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.586338 kubelet[3200]: W0805 22:13:17.586042 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.586338 kubelet[3200]: E0805 22:13:17.586059 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.587286 kubelet[3200]: E0805 22:13:17.587040 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.587286 kubelet[3200]: W0805 22:13:17.587055 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.587286 kubelet[3200]: E0805 22:13:17.587074 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.588584 kubelet[3200]: E0805 22:13:17.588546 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.588584 kubelet[3200]: W0805 22:13:17.588569 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.588584 kubelet[3200]: E0805 22:13:17.588586 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.589972 systemd[1]: Started cri-containerd-9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb.scope - libcontainer container 9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb. Aug 5 22:13:17.592419 kubelet[3200]: E0805 22:13:17.592398 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.592419 kubelet[3200]: W0805 22:13:17.592419 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.593339 kubelet[3200]: E0805 22:13:17.593319 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.594092 kubelet[3200]: E0805 22:13:17.594072 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.594092 kubelet[3200]: W0805 22:13:17.594089 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.597366 kubelet[3200]: E0805 22:13:17.597324 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.599675 kubelet[3200]: E0805 22:13:17.599657 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.599759 kubelet[3200]: W0805 22:13:17.599727 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.600838 kubelet[3200]: E0805 22:13:17.600824 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.602059 kubelet[3200]: E0805 22:13:17.601015 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.602059 kubelet[3200]: W0805 22:13:17.601041 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.602893 kubelet[3200]: E0805 22:13:17.602865 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.602893 kubelet[3200]: W0805 22:13:17.602892 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.604450 kubelet[3200]: E0805 22:13:17.604430 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.604583 kubelet[3200]: E0805 22:13:17.604571 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.605942 kubelet[3200]: E0805 22:13:17.605926 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.606175 kubelet[3200]: W0805 22:13:17.606155 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.607681 kubelet[3200]: E0805 22:13:17.607461 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.607681 kubelet[3200]: E0805 22:13:17.607599 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.607681 kubelet[3200]: W0805 22:13:17.607617 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.607681 kubelet[3200]: E0805 22:13:17.607638 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.608902 kubelet[3200]: E0805 22:13:17.608882 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.608902 kubelet[3200]: W0805 22:13:17.608901 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.609026 kubelet[3200]: E0805 22:13:17.608925 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.609777 kubelet[3200]: E0805 22:13:17.609302 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.609777 kubelet[3200]: W0805 22:13:17.609316 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.609777 kubelet[3200]: E0805 22:13:17.609341 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.610668 kubelet[3200]: E0805 22:13:17.609926 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.610743 kubelet[3200]: W0805 22:13:17.610672 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.610743 kubelet[3200]: E0805 22:13:17.610701 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:13:17.610964 kubelet[3200]: E0805 22:13:17.610950 3200 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:13:17.610964 kubelet[3200]: W0805 22:13:17.610964 3200 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:13:17.611076 kubelet[3200]: E0805 22:13:17.610980 3200 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:13:17.646185 containerd[1689]: time="2024-08-05T22:13:17.646145002Z" level=info msg="StartContainer for \"9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb\" returns successfully" Aug 5 22:13:17.667990 systemd[1]: cri-containerd-9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb.scope: Deactivated successfully. Aug 5 22:13:17.832549 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb-rootfs.mount: Deactivated successfully. Aug 5 22:13:18.350715 kubelet[3200]: E0805 22:13:18.350326 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:20.350699 kubelet[3200]: E0805 22:13:20.350290 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:20.877899 containerd[1689]: time="2024-08-05T22:13:20.877834858Z" level=info msg="shim disconnected" id=9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb namespace=k8s.io Aug 5 22:13:20.877899 containerd[1689]: time="2024-08-05T22:13:20.877898260Z" level=warning msg="cleaning up after shim disconnected" id=9693f7d77f5e2d7a173c3d52c05759798d4d4255ead08fe25d0de05c701b0adb namespace=k8s.io Aug 5 22:13:20.877899 containerd[1689]: time="2024-08-05T22:13:20.877911160Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:13:21.444034 containerd[1689]: time="2024-08-05T22:13:21.443403856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:13:22.351417 kubelet[3200]: E0805 22:13:22.350999 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:24.350843 kubelet[3200]: E0805 22:13:24.350657 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:26.352547 kubelet[3200]: E0805 22:13:26.351837 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:27.691335 containerd[1689]: time="2024-08-05T22:13:27.691284826Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:27.697205 containerd[1689]: time="2024-08-05T22:13:27.697151434Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active 
requests=0, bytes read=93087850" Aug 5 22:13:27.712655 containerd[1689]: time="2024-08-05T22:13:27.712589217Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:27.719241 containerd[1689]: time="2024-08-05T22:13:27.719179738Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:27.720056 containerd[1689]: time="2024-08-05T22:13:27.719935951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 6.276482894s" Aug 5 22:13:27.720056 containerd[1689]: time="2024-08-05T22:13:27.719971952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:13:27.722450 containerd[1689]: time="2024-08-05T22:13:27.722422597Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:13:27.771959 containerd[1689]: time="2024-08-05T22:13:27.771925404Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16\"" Aug 5 22:13:27.773803 containerd[1689]: time="2024-08-05T22:13:27.772346612Z" level=info msg="StartContainer for \"1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16\"" Aug 5 22:13:27.804595 systemd[1]: run-containerd-runc-k8s.io-1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16-runc.IopHz7.mount: Deactivated successfully. Aug 5 22:13:27.813935 systemd[1]: Started cri-containerd-1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16.scope - libcontainer container 1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16. Aug 5 22:13:27.870156 containerd[1689]: time="2024-08-05T22:13:27.870075303Z" level=info msg="StartContainer for \"1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16\" returns successfully" Aug 5 22:13:28.351945 kubelet[3200]: E0805 22:13:28.351384 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:30.352440 kubelet[3200]: E0805 22:13:30.350765 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:31.049345 systemd[1]: cri-containerd-1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16.scope: Deactivated successfully. 
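While the calico/cni image is pulled and the install-cni container starts, the kubelet keeps refusing to sync csi-node-driver-d9ldw with "cni plugin not initialized": the node stays NetworkReady=false until a CNI network configuration is in place. The short check below is purely illustrative and assumes the conventional locations /etc/cni/net.d for configs and /opt/cni/bin for plugin binaries; neither path appears in this log, so treat both as assumptions.

    #!/usr/bin/env python3
    # Illustrative readiness check for CNI setup; paths are conventional defaults,
    # not values taken from this log.
    import glob
    import os
    import sys

    CNI_CONF_DIR = "/etc/cni/net.d"   # container runtime reads network configs from here
    CNI_BIN_DIR = "/opt/cni/bin"      # plugin binaries (e.g. the calico CNI plugin)

    confs = sorted(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf")) +
                   glob.glob(os.path.join(CNI_CONF_DIR, "*.conflist")))
    if not confs:
        sys.exit("no CNI config found: the runtime keeps reporting NetworkPluginNotReady")
    print("CNI configs:", confs)
    print("plugin binaries present:", os.path.isdir(CNI_BIN_DIR) and bool(os.listdir(CNI_BIN_DIR)))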
Aug 5 22:13:31.073772 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16-rootfs.mount: Deactivated successfully. Aug 5 22:13:31.089001 kubelet[3200]: I0805 22:13:31.088970 3200 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.109803 3200 topology_manager.go:215] "Topology Admit Handler" podUID="ce9f3ff0-2240-4466-ac86-fd158dab8531" podNamespace="kube-system" podName="coredns-5dd5756b68-q5v27" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.120974 3200 topology_manager.go:215] "Topology Admit Handler" podUID="a980bbae-20fa-4436-b32d-bc71467778b8" podNamespace="kube-system" podName="coredns-5dd5756b68-jrgt4" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.122723 3200 topology_manager.go:215] "Topology Admit Handler" podUID="fca8028c-8cf8-428d-8837-54675ae4c7c7" podNamespace="calico-system" podName="calico-kube-controllers-7959968d79-6gn4k" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.277979 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a980bbae-20fa-4436-b32d-bc71467778b8-config-volume\") pod \"coredns-5dd5756b68-jrgt4\" (UID: \"a980bbae-20fa-4436-b32d-bc71467778b8\") " pod="kube-system/coredns-5dd5756b68-jrgt4" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.278083 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cv9hk\" (UniqueName: \"kubernetes.io/projected/a980bbae-20fa-4436-b32d-bc71467778b8-kube-api-access-cv9hk\") pod \"coredns-5dd5756b68-jrgt4\" (UID: \"a980bbae-20fa-4436-b32d-bc71467778b8\") " pod="kube-system/coredns-5dd5756b68-jrgt4" Aug 5 22:13:32.219873 kubelet[3200]: I0805 22:13:31.278153 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9jl2q\" (UniqueName: \"kubernetes.io/projected/fca8028c-8cf8-428d-8837-54675ae4c7c7-kube-api-access-9jl2q\") pod \"calico-kube-controllers-7959968d79-6gn4k\" (UID: \"fca8028c-8cf8-428d-8837-54675ae4c7c7\") " pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" Aug 5 22:13:31.123585 systemd[1]: Created slice kubepods-burstable-podce9f3ff0_2240_4466_ac86_fd158dab8531.slice - libcontainer container kubepods-burstable-podce9f3ff0_2240_4466_ac86_fd158dab8531.slice. 
Aug 5 22:13:32.221007 kubelet[3200]: I0805 22:13:31.278197 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce9f3ff0-2240-4466-ac86-fd158dab8531-config-volume\") pod \"coredns-5dd5756b68-q5v27\" (UID: \"ce9f3ff0-2240-4466-ac86-fd158dab8531\") " pod="kube-system/coredns-5dd5756b68-q5v27" Aug 5 22:13:32.221007 kubelet[3200]: I0805 22:13:31.278414 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fca8028c-8cf8-428d-8837-54675ae4c7c7-tigera-ca-bundle\") pod \"calico-kube-controllers-7959968d79-6gn4k\" (UID: \"fca8028c-8cf8-428d-8837-54675ae4c7c7\") " pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" Aug 5 22:13:32.221007 kubelet[3200]: I0805 22:13:31.278455 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlxq\" (UniqueName: \"kubernetes.io/projected/ce9f3ff0-2240-4466-ac86-fd158dab8531-kube-api-access-bmlxq\") pod \"coredns-5dd5756b68-q5v27\" (UID: \"ce9f3ff0-2240-4466-ac86-fd158dab8531\") " pod="kube-system/coredns-5dd5756b68-q5v27" Aug 5 22:13:31.138043 systemd[1]: Created slice kubepods-burstable-poda980bbae_20fa_4436_b32d_bc71467778b8.slice - libcontainer container kubepods-burstable-poda980bbae_20fa_4436_b32d_bc71467778b8.slice. Aug 5 22:13:31.143464 systemd[1]: Created slice kubepods-besteffort-podfca8028c_8cf8_428d_8837_54675ae4c7c7.slice - libcontainer container kubepods-besteffort-podfca8028c_8cf8_428d_8837_54675ae4c7c7.slice. Aug 5 22:13:32.358070 systemd[1]: Created slice kubepods-besteffort-pod9c02db63_d93a_43b1_92bc_c342f597f8fe.slice - libcontainer container kubepods-besteffort-pod9c02db63_d93a_43b1_92bc_c342f597f8fe.slice. 
Aug 5 22:13:32.360555 containerd[1689]: time="2024-08-05T22:13:32.360516003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9ldw,Uid:9c02db63-d93a-43b1-92bc-c342f597f8fe,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:32.524630 containerd[1689]: time="2024-08-05T22:13:32.524469408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q5v27,Uid:ce9f3ff0-2240-4466-ac86-fd158dab8531,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:32.528281 containerd[1689]: time="2024-08-05T22:13:32.528168676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jrgt4,Uid:a980bbae-20fa-4436-b32d-bc71467778b8,Namespace:kube-system,Attempt:0,}" Aug 5 22:13:32.528281 containerd[1689]: time="2024-08-05T22:13:32.528171076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959968d79-6gn4k,Uid:fca8028c-8cf8-428d-8837-54675ae4c7c7,Namespace:calico-system,Attempt:0,}" Aug 5 22:13:35.184364 containerd[1689]: time="2024-08-05T22:13:35.184269333Z" level=info msg="shim disconnected" id=1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16 namespace=k8s.io Aug 5 22:13:35.184364 containerd[1689]: time="2024-08-05T22:13:35.184345335Z" level=warning msg="cleaning up after shim disconnected" id=1bb363907c7f9b4e5038987e5b08106c08c2ca77b3550d8086cdfbdfec701c16 namespace=k8s.io Aug 5 22:13:35.184364 containerd[1689]: time="2024-08-05T22:13:35.184361335Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:13:35.450960 containerd[1689]: time="2024-08-05T22:13:35.450679176Z" level=error msg="Failed to destroy network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.452283 containerd[1689]: time="2024-08-05T22:13:35.452140403Z" level=error msg="encountered an error cleaning up failed sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.452283 containerd[1689]: time="2024-08-05T22:13:35.452226505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q5v27,Uid:ce9f3ff0-2240-4466-ac86-fd158dab8531,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.454191 kubelet[3200]: E0805 22:13:35.453968 3200 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.454191 kubelet[3200]: E0805 22:13:35.454052 3200 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q5v27" Aug 5 22:13:35.454191 kubelet[3200]: E0805 22:13:35.454085 3200 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-q5v27" Aug 5 22:13:35.454657 kubelet[3200]: E0805 22:13:35.454165 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-q5v27_kube-system(ce9f3ff0-2240-4466-ac86-fd158dab8531)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-q5v27_kube-system(ce9f3ff0-2240-4466-ac86-fd158dab8531)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q5v27" podUID="ce9f3ff0-2240-4466-ac86-fd158dab8531" Aug 5 22:13:35.455597 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d-shm.mount: Deactivated successfully. Aug 5 22:13:35.457261 containerd[1689]: time="2024-08-05T22:13:35.455925272Z" level=error msg="Failed to destroy network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.457261 containerd[1689]: time="2024-08-05T22:13:35.457030992Z" level=error msg="encountered an error cleaning up failed sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.457261 containerd[1689]: time="2024-08-05T22:13:35.457100193Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9ldw,Uid:9c02db63-d93a-43b1-92bc-c342f597f8fe,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.457416 kubelet[3200]: E0805 22:13:35.457304 3200 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" Aug 5 22:13:35.457416 kubelet[3200]: E0805 22:13:35.457356 3200 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:35.457416 kubelet[3200]: E0805 22:13:35.457385 3200 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d9ldw" Aug 5 22:13:35.457551 kubelet[3200]: E0805 22:13:35.457449 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d9ldw_calico-system(9c02db63-d93a-43b1-92bc-c342f597f8fe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d9ldw_calico-system(9c02db63-d93a-43b1-92bc-c342f597f8fe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:35.460273 containerd[1689]: time="2024-08-05T22:13:35.460003346Z" level=error msg="Failed to destroy network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.460353 containerd[1689]: time="2024-08-05T22:13:35.460311552Z" level=error msg="encountered an error cleaning up failed sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.460404 containerd[1689]: time="2024-08-05T22:13:35.460368553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jrgt4,Uid:a980bbae-20fa-4436-b32d-bc71467778b8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.462318 kubelet[3200]: E0805 22:13:35.462059 3200 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.462318 kubelet[3200]: E0805 22:13:35.462109 3200 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jrgt4" Aug 5 22:13:35.462318 kubelet[3200]: E0805 22:13:35.462134 3200 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-jrgt4" Aug 5 22:13:35.462492 kubelet[3200]: E0805 22:13:35.462188 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-jrgt4_kube-system(a980bbae-20fa-4436-b32d-bc71467778b8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-jrgt4_kube-system(a980bbae-20fa-4436-b32d-bc71467778b8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jrgt4" podUID="a980bbae-20fa-4436-b32d-bc71467778b8" Aug 5 22:13:35.463944 containerd[1689]: time="2024-08-05T22:13:35.463912617Z" level=error msg="Failed to destroy network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.464221 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3-shm.mount: Deactivated successfully. 
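Every RunPodSandbox attempt above fails with the same Calico error: the CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is up, and aborts both ADD and DEL while it is missing. The sketch below reproduces that gate and its error text; the path comes from the log itself, everything else is illustrative and not the plugin's actual code.

    #!/usr/bin/env python3
    # Illustrative reproduction of the check the Calico CNI plugin applies before
    # setting up or tearing down a pod's network; not the plugin's actual code.
    import os
    import sys

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node on startup

    def nodename() -> str:
        if not os.path.exists(NODENAME_FILE):
            # Matches the failure mode in the log: sandbox setup/teardown is refused
            # until calico/node is running and has mounted /var/lib/calico/.
            raise FileNotFoundError(
                f"stat {NODENAME_FILE}: no such file or directory: "
                "check that the calico/node container is running and has mounted /var/lib/calico/"
            )
        with open(NODENAME_FILE) as f:
            return f.read().strip()

    if __name__ == "__main__":
        try:
            print("node name:", nodename())
        except FileNotFoundError as err:
            sys.exit(str(err))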
Aug 5 22:13:35.464633 containerd[1689]: time="2024-08-05T22:13:35.464524828Z" level=error msg="encountered an error cleaning up failed sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.464816 containerd[1689]: time="2024-08-05T22:13:35.464748132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959968d79-6gn4k,Uid:fca8028c-8cf8-428d-8837-54675ae4c7c7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.465147 kubelet[3200]: E0805 22:13:35.465103 3200 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.465327 kubelet[3200]: E0805 22:13:35.465150 3200 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" Aug 5 22:13:35.465327 kubelet[3200]: E0805 22:13:35.465174 3200 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" Aug 5 22:13:35.465327 kubelet[3200]: E0805 22:13:35.465223 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7959968d79-6gn4k_calico-system(fca8028c-8cf8-428d-8837-54675ae4c7c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7959968d79-6gn4k_calico-system(fca8028c-8cf8-428d-8837-54675ae4c7c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" podUID="fca8028c-8cf8-428d-8837-54675ae4c7c7" Aug 5 22:13:35.473348 containerd[1689]: time="2024-08-05T22:13:35.473070083Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:13:35.473590 kubelet[3200]: I0805 22:13:35.473572 3200 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:35.474730 containerd[1689]: time="2024-08-05T22:13:35.474704813Z" level=info msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" Aug 5 22:13:35.475696 containerd[1689]: time="2024-08-05T22:13:35.475631430Z" level=info msg="Ensure that sandbox db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce in task-service has been cleanup successfully" Aug 5 22:13:35.477849 kubelet[3200]: I0805 22:13:35.477772 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:35.481708 containerd[1689]: time="2024-08-05T22:13:35.481669940Z" level=info msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" Aug 5 22:13:35.482937 containerd[1689]: time="2024-08-05T22:13:35.482809761Z" level=info msg="Ensure that sandbox 69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d in task-service has been cleanup successfully" Aug 5 22:13:35.483980 kubelet[3200]: I0805 22:13:35.483912 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:35.484645 containerd[1689]: time="2024-08-05T22:13:35.484564392Z" level=info msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" Aug 5 22:13:35.485059 containerd[1689]: time="2024-08-05T22:13:35.484946599Z" level=info msg="Ensure that sandbox 6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6 in task-service has been cleanup successfully" Aug 5 22:13:35.486183 kubelet[3200]: I0805 22:13:35.486170 3200 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:35.487098 containerd[1689]: time="2024-08-05T22:13:35.486849734Z" level=info msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" Aug 5 22:13:35.487270 containerd[1689]: time="2024-08-05T22:13:35.487237441Z" level=info msg="Ensure that sandbox be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3 in task-service has been cleanup successfully" Aug 5 22:13:35.569544 containerd[1689]: time="2024-08-05T22:13:35.569468336Z" level=error msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" failed" error="failed to destroy network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.569974 kubelet[3200]: E0805 22:13:35.569850 3200 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:35.569974 kubelet[3200]: E0805 22:13:35.569942 3200 kuberuntime_manager.go:1380] "Failed 
to stop sandbox" podSandboxID={"Type":"containerd","ID":"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce"} Aug 5 22:13:35.570146 kubelet[3200]: E0805 22:13:35.569997 3200 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"fca8028c-8cf8-428d-8837-54675ae4c7c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:35.570146 kubelet[3200]: E0805 22:13:35.570037 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"fca8028c-8cf8-428d-8837-54675ae4c7c7\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" podUID="fca8028c-8cf8-428d-8837-54675ae4c7c7" Aug 5 22:13:35.585874 containerd[1689]: time="2024-08-05T22:13:35.585595229Z" level=error msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" failed" error="failed to destroy network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.585874 containerd[1689]: time="2024-08-05T22:13:35.585816033Z" level=error msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" failed" error="failed to destroy network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.586559 kubelet[3200]: E0805 22:13:35.586157 3200 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:35.586559 kubelet[3200]: E0805 22:13:35.586443 3200 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3"} Aug 5 22:13:35.586559 kubelet[3200]: E0805 22:13:35.586465 3200 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:35.586559 kubelet[3200]: E0805 22:13:35.586488 3200 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6"} Aug 5 22:13:35.586559 kubelet[3200]: E0805 22:13:35.586530 3200 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"a980bbae-20fa-4436-b32d-bc71467778b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:35.586915 kubelet[3200]: E0805 22:13:35.586579 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"a980bbae-20fa-4436-b32d-bc71467778b8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-jrgt4" podUID="a980bbae-20fa-4436-b32d-bc71467778b8" Aug 5 22:13:35.586915 kubelet[3200]: E0805 22:13:35.586530 3200 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"9c02db63-d93a-43b1-92bc-c342f597f8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:35.586915 kubelet[3200]: E0805 22:13:35.586620 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"9c02db63-d93a-43b1-92bc-c342f597f8fe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d9ldw" podUID="9c02db63-d93a-43b1-92bc-c342f597f8fe" Aug 5 22:13:35.588217 containerd[1689]: time="2024-08-05T22:13:35.588174876Z" level=error msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" failed" error="failed to destroy network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:13:35.588406 kubelet[3200]: E0805 22:13:35.588379 3200 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:35.588483 kubelet[3200]: E0805 22:13:35.588427 3200 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d"} Aug 5 22:13:35.588533 kubelet[3200]: E0805 22:13:35.588498 3200 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"ce9f3ff0-2240-4466-ac86-fd158dab8531\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:13:35.588606 kubelet[3200]: E0805 22:13:35.588539 3200 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"ce9f3ff0-2240-4466-ac86-fd158dab8531\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-q5v27" podUID="ce9f3ff0-2240-4466-ac86-fd158dab8531" Aug 5 22:13:36.284336 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce-shm.mount: Deactivated successfully. Aug 5 22:13:36.284457 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6-shm.mount: Deactivated successfully. Aug 5 22:13:44.143760 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2527724718.mount: Deactivated successfully. 
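Editorial note: the repeated KillPodSandbox failures above all reduce to one condition — the Calico CNI delete path aborts when /var/lib/calico/nodename does not exist, which is expected here because the calico-node container only comes up at 22:13:44. Below is a minimal Go sketch of that guard; only the file path and the error wording are taken from the log, the function and constant names are illustrative, and this is not Calico's actual source.

    // Minimal sketch (not Calico's source) of the readiness check behind the
    // "stat /var/lib/calico/nodename: no such file or directory" failures above.
    package main

    import (
        "fmt"
        "os"
    )

    // nodenameFile is the path named in the log; calico/node writes it once it is running.
    const nodenameFile = "/var/lib/calico/nodename"

    // ensureCalicoNodeReady mimics the guard that makes CNI DEL fail until the file exists.
    func ensureCalicoNodeReady() error {
        if _, err := os.Stat(nodenameFile); err != nil {
            return fmt.Errorf("plugin type=%q failed (delete): %w: check that the calico/node container is running and has mounted /var/lib/calico/", "calico", err)
        }
        return nil
    }

    func main() {
        if err := ensureCalicoNodeReady(); err != nil {
            fmt.Println("StopPodSandbox would fail:", err)
            return
        }
        fmt.Println("calico/node is ready; sandbox teardown can proceed")
    }

Once calico-node has written the file (after the image pull that follows), the same StopPodSandbox calls succeed, as the later entries show.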
Aug 5 22:13:44.254527 containerd[1689]: time="2024-08-05T22:13:44.254433402Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:44.259521 containerd[1689]: time="2024-08-05T22:13:44.259450193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:13:44.263279 containerd[1689]: time="2024-08-05T22:13:44.263025859Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:44.267217 containerd[1689]: time="2024-08-05T22:13:44.267141534Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:44.267892 containerd[1689]: time="2024-08-05T22:13:44.267802546Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 8.794670661s" Aug 5 22:13:44.267892 containerd[1689]: time="2024-08-05T22:13:44.267844946Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:13:44.286546 containerd[1689]: time="2024-08-05T22:13:44.284015441Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:13:44.342682 containerd[1689]: time="2024-08-05T22:13:44.342637810Z" level=info msg="CreateContainer within sandbox \"1d7f753d7907def3f9b10910fea541276051686f732a447d59936d3f69ef77b3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd\"" Aug 5 22:13:44.343361 containerd[1689]: time="2024-08-05T22:13:44.343308022Z" level=info msg="StartContainer for \"92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd\"" Aug 5 22:13:44.378961 systemd[1]: Started cri-containerd-92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd.scope - libcontainer container 92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd. Aug 5 22:13:44.422331 containerd[1689]: time="2024-08-05T22:13:44.421426247Z" level=info msg="StartContainer for \"92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd\" returns successfully" Aug 5 22:13:44.728885 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:13:44.729058 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
Aug 5 22:13:45.665869 systemd-networkd[1574]: vxlan.calico: Link UP Aug 5 22:13:45.665881 systemd-networkd[1574]: vxlan.calico: Gained carrier Aug 5 22:13:47.352891 containerd[1689]: time="2024-08-05T22:13:47.352828801Z" level=info msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" Aug 5 22:13:47.398523 kubelet[3200]: I0805 22:13:47.398476 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-tm4ql" podStartSLOduration=4.3772929959999995 podCreationTimestamp="2024-08-05 22:13:12 +0000 UTC" firstStartedPulling="2024-08-05 22:13:13.247350421 +0000 UTC m=+14.985785863" lastFinishedPulling="2024-08-05 22:13:44.268325555 +0000 UTC m=+46.006760897" observedRunningTime="2024-08-05 22:13:44.535295623 +0000 UTC m=+46.273730965" watchObservedRunningTime="2024-08-05 22:13:47.39826803 +0000 UTC m=+49.136703372" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.398 [INFO][4479] k8s.go 608: Cleaning up netns ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.398 [INFO][4479] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" iface="eth0" netns="/var/run/netns/cni-71a9c4c4-6a07-4d44-6f82-4bec1362d80b" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.401 [INFO][4479] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" iface="eth0" netns="/var/run/netns/cni-71a9c4c4-6a07-4d44-6f82-4bec1362d80b" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.402 [INFO][4479] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" iface="eth0" netns="/var/run/netns/cni-71a9c4c4-6a07-4d44-6f82-4bec1362d80b" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.402 [INFO][4479] k8s.go 615: Releasing IP address(es) ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.402 [INFO][4479] utils.go 188: Calico CNI releasing IP address ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.424 [INFO][4485] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.424 [INFO][4485] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.424 [INFO][4485] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.429 [WARNING][4485] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.429 [INFO][4485] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.430 [INFO][4485] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:47.433851 containerd[1689]: 2024-08-05 22:13:47.431 [INFO][4479] k8s.go 621: Teardown processing complete. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:47.435756 containerd[1689]: time="2024-08-05T22:13:47.434031382Z" level=info msg="TearDown network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" successfully" Aug 5 22:13:47.435756 containerd[1689]: time="2024-08-05T22:13:47.434726194Z" level=info msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" returns successfully" Aug 5 22:13:47.435756 containerd[1689]: time="2024-08-05T22:13:47.435488408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jrgt4,Uid:a980bbae-20fa-4436-b32d-bc71467778b8,Namespace:kube-system,Attempt:1,}" Aug 5 22:13:47.438975 systemd[1]: run-netns-cni\x2d71a9c4c4\x2d6a07\x2d4d44\x2d6f82\x2d4bec1362d80b.mount: Deactivated successfully. Aug 5 22:13:47.457188 systemd-networkd[1574]: vxlan.calico: Gained IPv6LL Aug 5 22:13:47.609613 systemd-networkd[1574]: cali11a890645dc: Link UP Aug 5 22:13:47.609895 systemd-networkd[1574]: cali11a890645dc: Gained carrier Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.526 [INFO][4492] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0 coredns-5dd5756b68- kube-system a980bbae-20fa-4436-b32d-bc71467778b8 694 0 2024-08-05 22:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc coredns-5dd5756b68-jrgt4 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali11a890645dc [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.526 [INFO][4492] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.557 [INFO][4502] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" HandleID="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" 
Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.567 [INFO][4502] ipam_plugin.go 264: Auto assigning IP ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" HandleID="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000378960), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"coredns-5dd5756b68-jrgt4", "timestamp":"2024-08-05 22:13:47.557688437 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.567 [INFO][4502] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.567 [INFO][4502] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.567 [INFO][4502] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.569 [INFO][4502] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.573 [INFO][4502] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.577 [INFO][4502] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.580 [INFO][4502] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.582 [INFO][4502] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.582 [INFO][4502] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.585 [INFO][4502] ipam.go 1685: Creating new handle: k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.590 [INFO][4502] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.597 [INFO][4502] ipam.go 1216: Successfully claimed IPs: [192.168.71.65/26] block=192.168.71.64/26 handle="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.597 [INFO][4502] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.65/26] handle="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" 
host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.597 [INFO][4502] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:47.629941 containerd[1689]: 2024-08-05 22:13:47.598 [INFO][4502] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.65/26] IPv6=[] ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" HandleID="k8s-pod-network.0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.601 [INFO][4492] k8s.go 386: Populated endpoint ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a980bbae-20fa-4436-b32d-bc71467778b8", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"coredns-5dd5756b68-jrgt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11a890645dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.602 [INFO][4492] k8s.go 387: Calico CNI using IPs: [192.168.71.65/32] ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.602 [INFO][4492] dataplane_linux.go 68: Setting the host side veth name to cali11a890645dc ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.609 [INFO][4492] dataplane_linux.go 479: Disabling IPv4 forwarding 
ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.612 [INFO][4492] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a980bbae-20fa-4436-b32d-bc71467778b8", ResourceVersion:"694", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b", Pod:"coredns-5dd5756b68-jrgt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11a890645dc", MAC:"22:04:b0:1d:45:c2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:47.630904 containerd[1689]: 2024-08-05 22:13:47.628 [INFO][4492] k8s.go 500: Wrote updated endpoint to datastore ContainerID="0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b" Namespace="kube-system" Pod="coredns-5dd5756b68-jrgt4" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:47.659839 containerd[1689]: time="2024-08-05T22:13:47.659710897Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:47.660495 containerd[1689]: time="2024-08-05T22:13:47.659870300Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:47.660495 containerd[1689]: time="2024-08-05T22:13:47.660294108Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:47.660495 containerd[1689]: time="2024-08-05T22:13:47.660320808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:47.687913 systemd[1]: Started cri-containerd-0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b.scope - libcontainer container 0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b. Aug 5 22:13:47.739933 containerd[1689]: time="2024-08-05T22:13:47.739893859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-jrgt4,Uid:a980bbae-20fa-4436-b32d-bc71467778b8,Namespace:kube-system,Attempt:1,} returns sandbox id \"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b\"" Aug 5 22:13:47.743906 containerd[1689]: time="2024-08-05T22:13:47.743561526Z" level=info msg="CreateContainer within sandbox \"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:13:47.791842 containerd[1689]: time="2024-08-05T22:13:47.791700904Z" level=info msg="CreateContainer within sandbox \"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8eb28e622f714cb8eacabdde5c50010d1265f3cccd7b0ec91f4ae9dd2d6b00f5\"" Aug 5 22:13:47.793428 containerd[1689]: time="2024-08-05T22:13:47.792836724Z" level=info msg="StartContainer for \"8eb28e622f714cb8eacabdde5c50010d1265f3cccd7b0ec91f4ae9dd2d6b00f5\"" Aug 5 22:13:47.820201 systemd[1]: Started cri-containerd-8eb28e622f714cb8eacabdde5c50010d1265f3cccd7b0ec91f4ae9dd2d6b00f5.scope - libcontainer container 8eb28e622f714cb8eacabdde5c50010d1265f3cccd7b0ec91f4ae9dd2d6b00f5. Aug 5 22:13:47.847219 containerd[1689]: time="2024-08-05T22:13:47.847013612Z" level=info msg="StartContainer for \"8eb28e622f714cb8eacabdde5c50010d1265f3cccd7b0ec91f4ae9dd2d6b00f5\" returns successfully" Aug 5 22:13:48.353157 containerd[1689]: time="2024-08-05T22:13:48.353002539Z" level=info msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.421 [INFO][4610] k8s.go 608: Cleaning up netns ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.421 [INFO][4610] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" iface="eth0" netns="/var/run/netns/cni-8598f48d-f94f-cbcb-7000-a178854a9f43" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.424 [INFO][4610] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" iface="eth0" netns="/var/run/netns/cni-8598f48d-f94f-cbcb-7000-a178854a9f43" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.424 [INFO][4610] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" iface="eth0" netns="/var/run/netns/cni-8598f48d-f94f-cbcb-7000-a178854a9f43" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.424 [INFO][4610] k8s.go 615: Releasing IP address(es) ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.424 [INFO][4610] utils.go 188: Calico CNI releasing IP address ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.460 [INFO][4616] ipam_plugin.go 411: Releasing address using handleID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.460 [INFO][4616] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.460 [INFO][4616] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.465 [WARNING][4616] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.465 [INFO][4616] ipam_plugin.go 439: Releasing address using workloadID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.467 [INFO][4616] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:48.469347 containerd[1689]: 2024-08-05 22:13:48.468 [INFO][4610] k8s.go 621: Teardown processing complete. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:48.470025 containerd[1689]: time="2024-08-05T22:13:48.469548664Z" level=info msg="TearDown network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" successfully" Aug 5 22:13:48.470025 containerd[1689]: time="2024-08-05T22:13:48.469579865Z" level=info msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" returns successfully" Aug 5 22:13:48.470185 containerd[1689]: time="2024-08-05T22:13:48.470122375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959968d79-6gn4k,Uid:fca8028c-8cf8-428d-8837-54675ae4c7c7,Namespace:calico-system,Attempt:1,}" Aug 5 22:13:48.473772 systemd[1]: run-netns-cni\x2d8598f48d\x2df94f\x2dcbcb\x2d7000\x2da178854a9f43.mount: Deactivated successfully. 
Aug 5 22:13:48.598047 kubelet[3200]: I0805 22:13:48.597978 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-jrgt4" podStartSLOduration=42.597909205 podCreationTimestamp="2024-08-05 22:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:48.55596204 +0000 UTC m=+50.294397482" watchObservedRunningTime="2024-08-05 22:13:48.597909205 +0000 UTC m=+50.336344647" Aug 5 22:13:48.683303 systemd-networkd[1574]: cali74b6b0438d9: Link UP Aug 5 22:13:48.684036 systemd-networkd[1574]: cali74b6b0438d9: Gained carrier Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.585 [INFO][4624] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0 calico-kube-controllers-7959968d79- calico-system fca8028c-8cf8-428d-8837-54675ae4c7c7 705 0 2024-08-05 22:13:12 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7959968d79 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc calico-kube-controllers-7959968d79-6gn4k eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali74b6b0438d9 [] []}} ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.585 [INFO][4624] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.642 [INFO][4636] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" HandleID="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.652 [INFO][4636] ipam_plugin.go 264: Auto assigning IP ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" HandleID="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000292360), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"calico-kube-controllers-7959968d79-6gn4k", "timestamp":"2024-08-05 22:13:48.642723422 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.652 [INFO][4636] ipam_plugin.go 352: About to acquire host-wide 
IPAM lock. Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.652 [INFO][4636] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.653 [INFO][4636] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.654 [INFO][4636] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.659 [INFO][4636] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.663 [INFO][4636] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.665 [INFO][4636] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.669 [INFO][4636] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.669 [INFO][4636] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.670 [INFO][4636] ipam.go 1685: Creating new handle: k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831 Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.674 [INFO][4636] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.678 [INFO][4636] ipam.go 1216: Successfully claimed IPs: [192.168.71.66/26] block=192.168.71.64/26 handle="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.678 [INFO][4636] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.66/26] handle="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.678 [INFO][4636] ipam_plugin.go 373: Released host-wide IPAM lock. 
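Editorial note: the IPAM trace above (acquire the host-wide lock, look up the host's affinities, load block 192.168.71.64/26, claim one address, release the lock) is Calico's block-based allocation: this node holds an affine /26 and hands each new workload the next free address, which is why the pods in this log receive 192.168.71.65, .66 and .67 in turn. The Go sketch below is a toy illustration of that idea with made-up type and function names; it is not Calico's allocator and omits affinity handling, locking and datastore writes.

    // Toy sketch of block-based IPAM as traced above: the node holds an affine
    // /26 block and assigns the next unused address from it.
    package main

    import (
        "fmt"
        "net/netip"
    )

    type block struct {
        cidr netip.Prefix
        used map[netip.Addr]string // address -> handle (e.g. the pod it was claimed for)
    }

    func newBlock(cidr string) *block {
        return &block{cidr: netip.MustParsePrefix(cidr), used: map[netip.Addr]string{}}
    }

    // assign claims the first free address in the block, skipping the network address.
    func (b *block) assign(handle string) (netip.Addr, bool) {
        for a := b.cidr.Addr().Next(); b.cidr.Contains(a); a = a.Next() {
            if _, taken := b.used[a]; !taken {
                b.used[a] = handle
                return a, true
            }
        }
        return netip.Addr{}, false
    }

    func main() {
        b := newBlock("192.168.71.64/26")
        for _, pod := range []string{"coredns-5dd5756b68-jrgt4", "calico-kube-controllers-7959968d79-6gn4k", "csi-node-driver-d9ldw"} {
            if ip, ok := b.assign(pod); ok {
                fmt.Printf("%s -> %s/26\n", pod, ip) // yields .65, .66, .67 as in the log
            }
        }
    }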
Aug 5 22:13:48.701436 containerd[1689]: 2024-08-05 22:13:48.678 [INFO][4636] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.66/26] IPv6=[] ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" HandleID="k8s-pod-network.b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.680 [INFO][4624] k8s.go 386: Populated endpoint ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0", GenerateName:"calico-kube-controllers-7959968d79-", Namespace:"calico-system", SelfLink:"", UID:"fca8028c-8cf8-428d-8837-54675ae4c7c7", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959968d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"calico-kube-controllers-7959968d79-6gn4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b6b0438d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.680 [INFO][4624] k8s.go 387: Calico CNI using IPs: [192.168.71.66/32] ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.680 [INFO][4624] dataplane_linux.go 68: Setting the host side veth name to cali74b6b0438d9 ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.682 [INFO][4624] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.682 [INFO][4624] k8s.go 
414: Added Mac, interface name, and active container ID to endpoint ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0", GenerateName:"calico-kube-controllers-7959968d79-", Namespace:"calico-system", SelfLink:"", UID:"fca8028c-8cf8-428d-8837-54675ae4c7c7", ResourceVersion:"705", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959968d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831", Pod:"calico-kube-controllers-7959968d79-6gn4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b6b0438d9", MAC:"d2:fc:20:53:2c:28", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:48.704737 containerd[1689]: 2024-08-05 22:13:48.692 [INFO][4624] k8s.go 500: Wrote updated endpoint to datastore ContainerID="b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831" Namespace="calico-system" Pod="calico-kube-controllers-7959968d79-6gn4k" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:48.745918 containerd[1689]: time="2024-08-05T22:13:48.745649699Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:48.745918 containerd[1689]: time="2024-08-05T22:13:48.745810502Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:48.745918 containerd[1689]: time="2024-08-05T22:13:48.745842602Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:48.746170 containerd[1689]: time="2024-08-05T22:13:48.745923104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:48.778005 systemd[1]: Started cri-containerd-b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831.scope - libcontainer container b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831. 
Aug 5 22:13:48.835925 containerd[1689]: time="2024-08-05T22:13:48.835864944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7959968d79-6gn4k,Uid:fca8028c-8cf8-428d-8837-54675ae4c7c7,Namespace:calico-system,Attempt:1,} returns sandbox id \"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831\"" Aug 5 22:13:48.837649 containerd[1689]: time="2024-08-05T22:13:48.837619076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:13:49.056092 systemd-networkd[1574]: cali11a890645dc: Gained IPv6LL Aug 5 22:13:50.358645 containerd[1689]: time="2024-08-05T22:13:50.358258746Z" level=info msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" Aug 5 22:13:50.400107 systemd-networkd[1574]: cali74b6b0438d9: Gained IPv6LL Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.404 [INFO][4715] k8s.go 608: Cleaning up netns ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.404 [INFO][4715] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" iface="eth0" netns="/var/run/netns/cni-b814d96f-c333-1fd1-84cf-8636fdd938ae" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.405 [INFO][4715] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" iface="eth0" netns="/var/run/netns/cni-b814d96f-c333-1fd1-84cf-8636fdd938ae" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.405 [INFO][4715] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" iface="eth0" netns="/var/run/netns/cni-b814d96f-c333-1fd1-84cf-8636fdd938ae" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.405 [INFO][4715] k8s.go 615: Releasing IP address(es) ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.405 [INFO][4715] utils.go 188: Calico CNI releasing IP address ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.429 [INFO][4721] ipam_plugin.go 411: Releasing address using handleID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.429 [INFO][4721] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.429 [INFO][4721] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.442 [WARNING][4721] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.442 [INFO][4721] ipam_plugin.go 439: Releasing address using workloadID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.443 [INFO][4721] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:50.446871 containerd[1689]: 2024-08-05 22:13:50.445 [INFO][4715] k8s.go 621: Teardown processing complete. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:50.449347 containerd[1689]: time="2024-08-05T22:13:50.447560537Z" level=info msg="TearDown network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" successfully" Aug 5 22:13:50.449347 containerd[1689]: time="2024-08-05T22:13:50.447595437Z" level=info msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" returns successfully" Aug 5 22:13:50.451366 containerd[1689]: time="2024-08-05T22:13:50.451332404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9ldw,Uid:9c02db63-d93a-43b1-92bc-c342f597f8fe,Namespace:calico-system,Attempt:1,}" Aug 5 22:13:50.452938 systemd[1]: run-netns-cni\x2db814d96f\x2dc333\x2d1fd1\x2d84cf\x2d8636fdd938ae.mount: Deactivated successfully. Aug 5 22:13:50.616746 systemd-networkd[1574]: cali797616e92eb: Link UP Aug 5 22:13:50.617049 systemd-networkd[1574]: cali797616e92eb: Gained carrier Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.529 [INFO][4728] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0 csi-node-driver- calico-system 9c02db63-d93a-43b1-92bc-c342f597f8fe 723 0 2024-08-05 22:13:12 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc csi-node-driver-d9ldw eth0 default [] [] [kns.calico-system ksa.calico-system.default] cali797616e92eb [] []}} ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.530 [INFO][4728] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.571 [INFO][4739] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" HandleID="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" 
Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.583 [INFO][4739] ipam_plugin.go 264: Auto assigning IP ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" HandleID="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efb00), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"csi-node-driver-d9ldw", "timestamp":"2024-08-05 22:13:50.571756949 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.583 [INFO][4739] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.583 [INFO][4739] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.583 [INFO][4739] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.585 [INFO][4739] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.590 [INFO][4739] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.594 [INFO][4739] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.596 [INFO][4739] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.598 [INFO][4739] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.598 [INFO][4739] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.600 [INFO][4739] ipam.go 1685: Creating new handle: k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.604 [INFO][4739] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.611 [INFO][4739] ipam.go 1216: Successfully claimed IPs: [192.168.71.67/26] block=192.168.71.64/26 handle="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.611 [INFO][4739] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.67/26] handle="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" 
host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.611 [INFO][4739] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:50.636211 containerd[1689]: 2024-08-05 22:13:50.611 [INFO][4739] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.67/26] IPv6=[] ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" HandleID="k8s-pod-network.d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.613 [INFO][4728] k8s.go 386: Populated endpoint ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c02db63-d93a-43b1-92bc-c342f597f8fe", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"csi-node-driver-d9ldw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali797616e92eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.613 [INFO][4728] k8s.go 387: Calico CNI using IPs: [192.168.71.67/32] ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.613 [INFO][4728] dataplane_linux.go 68: Setting the host side veth name to cali797616e92eb ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.616 [INFO][4728] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.616 [INFO][4728] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c02db63-d93a-43b1-92bc-c342f597f8fe", ResourceVersion:"723", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf", Pod:"csi-node-driver-d9ldw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali797616e92eb", MAC:"36:ed:26:4a:ac:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:50.639924 containerd[1689]: 2024-08-05 22:13:50.628 [INFO][4728] k8s.go 500: Wrote updated endpoint to datastore ContainerID="d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf" Namespace="calico-system" Pod="csi-node-driver-d9ldw" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:51.349810 containerd[1689]: time="2024-08-05T22:13:51.349659806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:51.349810 containerd[1689]: time="2024-08-05T22:13:51.349722308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:51.349810 containerd[1689]: time="2024-08-05T22:13:51.349746908Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:51.350632 containerd[1689]: time="2024-08-05T22:13:51.349765308Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:51.353316 containerd[1689]: time="2024-08-05T22:13:51.352491857Z" level=info msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" Aug 5 22:13:51.393221 systemd[1]: Started cri-containerd-d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf.scope - libcontainer container d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf. 
Aug 5 22:13:51.450188 containerd[1689]: time="2024-08-05T22:13:51.450134996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d9ldw,Uid:9c02db63-d93a-43b1-92bc-c342f597f8fe,Namespace:calico-system,Attempt:1,} returns sandbox id \"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf\"" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.470 [INFO][4806] k8s.go 608: Cleaning up netns ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.470 [INFO][4806] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" iface="eth0" netns="/var/run/netns/cni-bdbc2265-c06f-bd32-461c-9b45dc8606ef" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.471 [INFO][4806] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" iface="eth0" netns="/var/run/netns/cni-bdbc2265-c06f-bd32-461c-9b45dc8606ef" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.471 [INFO][4806] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" iface="eth0" netns="/var/run/netns/cni-bdbc2265-c06f-bd32-461c-9b45dc8606ef" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.471 [INFO][4806] k8s.go 615: Releasing IP address(es) ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.471 [INFO][4806] utils.go 188: Calico CNI releasing IP address ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.507 [INFO][4829] ipam_plugin.go 411: Releasing address using handleID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.507 [INFO][4829] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.507 [INFO][4829] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.517 [WARNING][4829] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.517 [INFO][4829] ipam_plugin.go 439: Releasing address using workloadID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.519 [INFO][4829] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:51.523927 containerd[1689]: 2024-08-05 22:13:51.522 [INFO][4806] k8s.go 621: Teardown processing complete. 
ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:51.529944 containerd[1689]: time="2024-08-05T22:13:51.524585723Z" level=info msg="TearDown network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" successfully" Aug 5 22:13:51.529944 containerd[1689]: time="2024-08-05T22:13:51.524624923Z" level=info msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" returns successfully" Aug 5 22:13:51.529944 containerd[1689]: time="2024-08-05T22:13:51.526057049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q5v27,Uid:ce9f3ff0-2240-4466-ac86-fd158dab8531,Namespace:kube-system,Attempt:1,}" Aug 5 22:13:51.532695 systemd[1]: run-netns-cni\x2dbdbc2265\x2dc06f\x2dbd32\x2d461c\x2d9b45dc8606ef.mount: Deactivated successfully. Aug 5 22:13:51.738923 systemd-networkd[1574]: cali2f08a67a8f1: Link UP Aug 5 22:13:51.741066 systemd-networkd[1574]: cali2f08a67a8f1: Gained carrier Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.630 [INFO][4835] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0 coredns-5dd5756b68- kube-system ce9f3ff0-2240-4466-ac86-fd158dab8531 732 0 2024-08-05 22:13:06 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc coredns-5dd5756b68-q5v27 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2f08a67a8f1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.630 [INFO][4835] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.677 [INFO][4849] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" HandleID="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.690 [INFO][4849] ipam_plugin.go 264: Auto assigning IP ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" HandleID="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002edd50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"coredns-5dd5756b68-q5v27", "timestamp":"2024-08-05 22:13:51.67770695 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:13:51.763882 
containerd[1689]: 2024-08-05 22:13:51.690 [INFO][4849] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.690 [INFO][4849] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.690 [INFO][4849] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.693 [INFO][4849] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.701 [INFO][4849] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.709 [INFO][4849] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.711 [INFO][4849] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.714 [INFO][4849] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.714 [INFO][4849] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.717 [INFO][4849] ipam.go 1685: Creating new handle: k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.721 [INFO][4849] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.728 [INFO][4849] ipam.go 1216: Successfully claimed IPs: [192.168.71.68/26] block=192.168.71.64/26 handle="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.728 [INFO][4849] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.68/26] handle="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.728 [INFO][4849] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:13:51.763882 containerd[1689]: 2024-08-05 22:13:51.728 [INFO][4849] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.68/26] IPv6=[] ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" HandleID="k8s-pod-network.2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.732 [INFO][4835] k8s.go 386: Populated endpoint ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ce9f3ff0-2240-4466-ac86-fd158dab8531", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"coredns-5dd5756b68-q5v27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f08a67a8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.732 [INFO][4835] k8s.go 387: Calico CNI using IPs: [192.168.71.68/32] ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.732 [INFO][4835] dataplane_linux.go 68: Setting the host side veth name to cali2f08a67a8f1 ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.738 [INFO][4835] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" 
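Both workloads on this node draw from the same affine block: the coredns pod is assigned 192.168.71.68/26 out of 192.168.71.64/26, just as the CSI pod received .67 above, and each endpoint is then recorded with a /32 IPNetwork. A short standard-library check, using only values copied from the records above, that those addresses really sit inside that /26 and how much room the block has:

# Sketch: confirm the claimed addresses (.67 for csi-node-driver-d9ldw, .68 for
# coredns-5dd5756b68-q5v27) fall inside the /26 block this node holds an
# affinity for. All values are copied from the log records above.
import ipaddress

block = ipaddress.ip_network("192.168.71.64/26")
claimed = [ipaddress.ip_address("192.168.71.67"),
           ipaddress.ip_address("192.168.71.68")]

print(f"block {block}: {block.num_addresses} addresses "
      f"({block.network_address} - {block.broadcast_address})")
for ip in claimed:
    print(f"{ip} in {block}: {ip in block}")

A /26 block spans 64 addresses (192.168.71.64 through 192.168.71.127), which is the headroom this node has from this block before IPAM would need to claim another one.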
Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.740 [INFO][4835] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ce9f3ff0-2240-4466-ac86-fd158dab8531", ResourceVersion:"732", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e", Pod:"coredns-5dd5756b68-q5v27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f08a67a8f1", MAC:"72:0d:db:ae:b2:b3", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:51.764807 containerd[1689]: 2024-08-05 22:13:51.758 [INFO][4835] k8s.go 500: Wrote updated endpoint to datastore ContainerID="2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e" Namespace="kube-system" Pod="coredns-5dd5756b68-q5v27" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:51.815580 containerd[1689]: time="2024-08-05T22:13:51.815077097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:13:51.815580 containerd[1689]: time="2024-08-05T22:13:51.815170399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:51.815580 containerd[1689]: time="2024-08-05T22:13:51.815191699Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:13:51.815580 containerd[1689]: time="2024-08-05T22:13:51.815248800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:13:51.852209 systemd[1]: run-containerd-runc-k8s.io-2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e-runc.t0U6tG.mount: Deactivated successfully. 
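The WorkloadEndpoint dumps above render the coredns container ports as Go hex literals (Port:0x35, Port:0x23c1), which is easy to misread. A trivial decoder, with the values copied verbatim from the WorkloadEndpointPort entries:

# Sketch: decode the hex port values printed in the endpoint dumps above.
ports = {"dns": "0x35", "dns-tcp": "0x35", "metrics": "0x23c1"}
for name, hexval in ports.items():
    print(f"{name:8s} -> {int(hexval, 16)}")
# dns      -> 53
# dns-tcp  -> 53
# metrics  -> 9153

0x35 is 53, the DNS port for both the UDP and TCP entries, and 0x23c1 is 9153, the CoreDNS Prometheus metrics port.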
Aug 5 22:13:51.863056 systemd[1]: Started cri-containerd-2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e.scope - libcontainer container 2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e. Aug 5 22:13:51.933221 containerd[1689]: time="2024-08-05T22:13:51.933162601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-q5v27,Uid:ce9f3ff0-2240-4466-ac86-fd158dab8531,Namespace:kube-system,Attempt:1,} returns sandbox id \"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e\"" Aug 5 22:13:51.936404 systemd-networkd[1574]: cali797616e92eb: Gained IPv6LL Aug 5 22:13:51.939867 containerd[1689]: time="2024-08-05T22:13:51.939196608Z" level=info msg="CreateContainer within sandbox \"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:13:51.987895 containerd[1689]: time="2024-08-05T22:13:51.987766873Z" level=info msg="CreateContainer within sandbox \"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"6a2e5b51001e688e00bde10b0c1a1ee09c1a4f034c52e4214be37a8c221af306\"" Aug 5 22:13:51.988816 containerd[1689]: time="2024-08-05T22:13:51.988774091Z" level=info msg="StartContainer for \"6a2e5b51001e688e00bde10b0c1a1ee09c1a4f034c52e4214be37a8c221af306\"" Aug 5 22:13:52.032222 systemd[1]: Started cri-containerd-6a2e5b51001e688e00bde10b0c1a1ee09c1a4f034c52e4214be37a8c221af306.scope - libcontainer container 6a2e5b51001e688e00bde10b0c1a1ee09c1a4f034c52e4214be37a8c221af306. Aug 5 22:13:52.076930 containerd[1689]: time="2024-08-05T22:13:52.076815460Z" level=info msg="StartContainer for \"6a2e5b51001e688e00bde10b0c1a1ee09c1a4f034c52e4214be37a8c221af306\" returns successfully" Aug 5 22:13:52.328022 containerd[1689]: time="2024-08-05T22:13:52.327886232Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:52.331711 containerd[1689]: time="2024-08-05T22:13:52.331634799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:13:52.337515 containerd[1689]: time="2024-08-05T22:13:52.337327300Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:52.343568 containerd[1689]: time="2024-08-05T22:13:52.343498210Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:52.344342 containerd[1689]: time="2024-08-05T22:13:52.344144022Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.506261241s" Aug 5 22:13:52.344342 containerd[1689]: time="2024-08-05T22:13:52.344185523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:13:52.345934 
containerd[1689]: time="2024-08-05T22:13:52.344995537Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:13:52.354131 containerd[1689]: time="2024-08-05T22:13:52.354098699Z" level=info msg="CreateContainer within sandbox \"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:13:52.396946 containerd[1689]: time="2024-08-05T22:13:52.396914962Z" level=info msg="CreateContainer within sandbox \"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe\"" Aug 5 22:13:52.398598 containerd[1689]: time="2024-08-05T22:13:52.397373670Z" level=info msg="StartContainer for \"0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe\"" Aug 5 22:13:52.425959 systemd[1]: Started cri-containerd-0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe.scope - libcontainer container 0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe. Aug 5 22:13:52.473046 containerd[1689]: time="2024-08-05T22:13:52.472865515Z" level=info msg="StartContainer for \"0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe\" returns successfully" Aug 5 22:13:52.582116 kubelet[3200]: I0805 22:13:52.581980 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-7959968d79-6gn4k" podStartSLOduration=37.074377093 podCreationTimestamp="2024-08-05 22:13:12 +0000 UTC" firstStartedPulling="2024-08-05 22:13:48.837251269 +0000 UTC m=+50.575686611" lastFinishedPulling="2024-08-05 22:13:52.344804134 +0000 UTC m=+54.083239576" observedRunningTime="2024-08-05 22:13:52.579153008 +0000 UTC m=+54.317588450" watchObservedRunningTime="2024-08-05 22:13:52.581930058 +0000 UTC m=+54.320365400" Aug 5 22:13:52.605220 kubelet[3200]: I0805 22:13:52.603584 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-q5v27" podStartSLOduration=46.603534342 podCreationTimestamp="2024-08-05 22:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:13:52.603189436 +0000 UTC m=+54.341624778" watchObservedRunningTime="2024-08-05 22:13:52.603534342 +0000 UTC m=+54.341969684" Aug 5 22:13:52.832062 systemd-networkd[1574]: cali2f08a67a8f1: Gained IPv6LL Aug 5 22:13:53.817794 containerd[1689]: time="2024-08-05T22:13:53.817723271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:53.821031 containerd[1689]: time="2024-08-05T22:13:53.820957029Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:13:53.823559 containerd[1689]: time="2024-08-05T22:13:53.823479974Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:53.827889 containerd[1689]: time="2024-08-05T22:13:53.827825051Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:53.829051 containerd[1689]: 
time="2024-08-05T22:13:53.828493963Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.483462426s" Aug 5 22:13:53.829051 containerd[1689]: time="2024-08-05T22:13:53.828534564Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:13:53.830618 containerd[1689]: time="2024-08-05T22:13:53.830588501Z" level=info msg="CreateContainer within sandbox \"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:13:53.875373 containerd[1689]: time="2024-08-05T22:13:53.875332098Z" level=info msg="CreateContainer within sandbox \"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988\"" Aug 5 22:13:53.875939 containerd[1689]: time="2024-08-05T22:13:53.875829507Z" level=info msg="StartContainer for \"d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988\"" Aug 5 22:13:53.911022 systemd[1]: run-containerd-runc-k8s.io-d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988-runc.CFa5eK.mount: Deactivated successfully. Aug 5 22:13:53.918278 systemd[1]: Started cri-containerd-d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988.scope - libcontainer container d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988. 
Aug 5 22:13:53.948071 containerd[1689]: time="2024-08-05T22:13:53.947983092Z" level=info msg="StartContainer for \"d65f9a0899761d83330201ed911f234e7b2a56ee558b5d1d1c09c2d4fd685988\" returns successfully" Aug 5 22:13:53.949583 containerd[1689]: time="2024-08-05T22:13:53.949555520Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:13:56.017725 containerd[1689]: time="2024-08-05T22:13:56.017589659Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.019980 containerd[1689]: time="2024-08-05T22:13:56.019840799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:13:56.026724 containerd[1689]: time="2024-08-05T22:13:56.026663221Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.035810 containerd[1689]: time="2024-08-05T22:13:56.035730682Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:13:56.036586 containerd[1689]: time="2024-08-05T22:13:56.036423294Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 2.08663197s" Aug 5 22:13:56.036586 containerd[1689]: time="2024-08-05T22:13:56.036465595Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:13:56.039153 containerd[1689]: time="2024-08-05T22:13:56.038758436Z" level=info msg="CreateContainer within sandbox \"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:13:56.095259 containerd[1689]: time="2024-08-05T22:13:56.095134040Z" level=info msg="CreateContainer within sandbox \"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740\"" Aug 5 22:13:56.096111 containerd[1689]: time="2024-08-05T22:13:56.096070757Z" level=info msg="StartContainer for \"4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740\"" Aug 5 22:13:56.131708 systemd[1]: run-containerd-runc-k8s.io-4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740-runc.bloJ35.mount: Deactivated successfully. Aug 5 22:13:56.142956 systemd[1]: Started cri-containerd-4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740.scope - libcontainer container 4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740. 
Aug 5 22:13:56.177149 containerd[1689]: time="2024-08-05T22:13:56.176885797Z" level=info msg="StartContainer for \"4b5e83866dbe7f2edf4a0da13bb9de7cb5ca8a5f1fccb8fb4c381d004b45a740\" returns successfully" Aug 5 22:13:56.460224 kubelet[3200]: I0805 22:13:56.460179 3200 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:13:56.460224 kubelet[3200]: I0805 22:13:56.460217 3200 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:13:58.342171 containerd[1689]: time="2024-08-05T22:13:58.342052814Z" level=info msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.377 [WARNING][5108] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0", GenerateName:"calico-kube-controllers-7959968d79-", Namespace:"calico-system", SelfLink:"", UID:"fca8028c-8cf8-428d-8837-54675ae4c7c7", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959968d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831", Pod:"calico-kube-controllers-7959968d79-6gn4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b6b0438d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.377 [INFO][5108] k8s.go 608: Cleaning up netns ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.377 [INFO][5108] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" iface="eth0" netns="" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.377 [INFO][5108] k8s.go 615: Releasing IP address(es) ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.378 [INFO][5108] utils.go 188: Calico CNI releasing IP address ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.396 [INFO][5116] ipam_plugin.go 411: Releasing address using handleID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.396 [INFO][5116] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.396 [INFO][5116] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.401 [WARNING][5116] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.401 [INFO][5116] ipam_plugin.go 439: Releasing address using workloadID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.403 [INFO][5116] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.405008 containerd[1689]: 2024-08-05 22:13:58.403 [INFO][5108] k8s.go 621: Teardown processing complete. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.405820 containerd[1689]: time="2024-08-05T22:13:58.405046258Z" level=info msg="TearDown network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" successfully" Aug 5 22:13:58.405820 containerd[1689]: time="2024-08-05T22:13:58.405084359Z" level=info msg="StopPodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" returns successfully" Aug 5 22:13:58.405820 containerd[1689]: time="2024-08-05T22:13:58.405754871Z" level=info msg="RemovePodSandbox for \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" Aug 5 22:13:58.405964 containerd[1689]: time="2024-08-05T22:13:58.405868073Z" level=info msg="Forcibly stopping sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\"" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.443 [WARNING][5135] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0", GenerateName:"calico-kube-controllers-7959968d79-", Namespace:"calico-system", SelfLink:"", UID:"fca8028c-8cf8-428d-8837-54675ae4c7c7", ResourceVersion:"752", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7959968d79", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"b3865a98e4fdc41b45b7a4a602fb5a24b5373650bd13bb50d051716315a22831", Pod:"calico-kube-controllers-7959968d79-6gn4k", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.71.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali74b6b0438d9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.443 [INFO][5135] k8s.go 608: Cleaning up netns ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.443 [INFO][5135] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" iface="eth0" netns="" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.443 [INFO][5135] k8s.go 615: Releasing IP address(es) ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.443 [INFO][5135] utils.go 188: Calico CNI releasing IP address ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.465 [INFO][5141] ipam_plugin.go 411: Releasing address using handleID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.465 [INFO][5141] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.466 [INFO][5141] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.471 [WARNING][5141] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.471 [INFO][5141] ipam_plugin.go 439: Releasing address using workloadID ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" HandleID="k8s-pod-network.db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--kube--controllers--7959968d79--6gn4k-eth0" Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.473 [INFO][5141] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.474794 containerd[1689]: 2024-08-05 22:13:58.473 [INFO][5135] k8s.go 621: Teardown processing complete. ContainerID="db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce" Aug 5 22:13:58.475436 containerd[1689]: time="2024-08-05T22:13:58.474834225Z" level=info msg="TearDown network for sandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" successfully" Aug 5 22:13:58.483765 containerd[1689]: time="2024-08-05T22:13:58.483715787Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:13:58.483902 containerd[1689]: time="2024-08-05T22:13:58.483806388Z" level=info msg="RemovePodSandbox \"db5539fd0936fd6e0222bd627942db9dbf5bd62f558d77fd3cce47d44936e9ce\" returns successfully" Aug 5 22:13:58.484264 containerd[1689]: time="2024-08-05T22:13:58.484231896Z" level=info msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.525 [WARNING][5159] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a980bbae-20fa-4436-b32d-bc71467778b8", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b", Pod:"coredns-5dd5756b68-jrgt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11a890645dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.526 [INFO][5159] k8s.go 608: Cleaning up netns ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.526 [INFO][5159] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" iface="eth0" netns="" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.526 [INFO][5159] k8s.go 615: Releasing IP address(es) ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.526 [INFO][5159] utils.go 188: Calico CNI releasing IP address ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.546 [INFO][5165] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.546 [INFO][5165] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.546 [INFO][5165] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.553 [WARNING][5165] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.553 [INFO][5165] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.555 [INFO][5165] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.560298 containerd[1689]: 2024-08-05 22:13:58.558 [INFO][5159] k8s.go 621: Teardown processing complete. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.561053 containerd[1689]: time="2024-08-05T22:13:58.560378079Z" level=info msg="TearDown network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" successfully" Aug 5 22:13:58.561053 containerd[1689]: time="2024-08-05T22:13:58.560406879Z" level=info msg="StopPodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" returns successfully" Aug 5 22:13:58.561053 containerd[1689]: time="2024-08-05T22:13:58.561035991Z" level=info msg="RemovePodSandbox for \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" Aug 5 22:13:58.561172 containerd[1689]: time="2024-08-05T22:13:58.561085092Z" level=info msg="Forcibly stopping sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\"" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.598 [WARNING][5185] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"a980bbae-20fa-4436-b32d-bc71467778b8", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"0ff9310d912104e48d58f4ba117a864f606c0e0e82dd160efdedf31148794c0b", Pod:"coredns-5dd5756b68-jrgt4", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali11a890645dc", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.598 [INFO][5185] k8s.go 608: Cleaning up netns ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.598 [INFO][5185] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" iface="eth0" netns="" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.598 [INFO][5185] k8s.go 615: Releasing IP address(es) ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.598 [INFO][5185] utils.go 188: Calico CNI releasing IP address ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.616 [INFO][5191] ipam_plugin.go 411: Releasing address using handleID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.616 [INFO][5191] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.616 [INFO][5191] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.621 [WARNING][5191] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.621 [INFO][5191] ipam_plugin.go 439: Releasing address using workloadID ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" HandleID="k8s-pod-network.6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--jrgt4-eth0" Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.622 [INFO][5191] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.624224 containerd[1689]: 2024-08-05 22:13:58.623 [INFO][5185] k8s.go 621: Teardown processing complete. ContainerID="6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6" Aug 5 22:13:58.624224 containerd[1689]: time="2024-08-05T22:13:58.624184637Z" level=info msg="TearDown network for sandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" successfully" Aug 5 22:13:58.638291 containerd[1689]: time="2024-08-05T22:13:58.638245093Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:13:58.638451 containerd[1689]: time="2024-08-05T22:13:58.638314194Z" level=info msg="RemovePodSandbox \"6d12ae9012a144ba5a2247b434a8bba76efac6558529e0f14b9aa1109ffc60f6\" returns successfully" Aug 5 22:13:58.638887 containerd[1689]: time="2024-08-05T22:13:58.638855004Z" level=info msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.670 [WARNING][5209] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c02db63-d93a-43b1-92bc-c342f597f8fe", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf", Pod:"csi-node-driver-d9ldw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali797616e92eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.670 [INFO][5209] k8s.go 608: Cleaning up netns ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.670 [INFO][5209] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" iface="eth0" netns="" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.670 [INFO][5209] k8s.go 615: Releasing IP address(es) ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.670 [INFO][5209] utils.go 188: Calico CNI releasing IP address ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.689 [INFO][5215] ipam_plugin.go 411: Releasing address using handleID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.689 [INFO][5215] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.689 [INFO][5215] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.693 [WARNING][5215] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.693 [INFO][5215] ipam_plugin.go 439: Releasing address using workloadID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.694 [INFO][5215] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.696761 containerd[1689]: 2024-08-05 22:13:58.695 [INFO][5209] k8s.go 621: Teardown processing complete. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.697933 containerd[1689]: time="2024-08-05T22:13:58.696830656Z" level=info msg="TearDown network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" successfully" Aug 5 22:13:58.697933 containerd[1689]: time="2024-08-05T22:13:58.696863457Z" level=info msg="StopPodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" returns successfully" Aug 5 22:13:58.697933 containerd[1689]: time="2024-08-05T22:13:58.697385667Z" level=info msg="RemovePodSandbox for \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" Aug 5 22:13:58.697933 containerd[1689]: time="2024-08-05T22:13:58.697421567Z" level=info msg="Forcibly stopping sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\"" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.727 [WARNING][5233] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9c02db63-d93a-43b1-92bc-c342f597f8fe", ResourceVersion:"773", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"d6f0ea16c695f1d2d868343fe6252a548b527eec3db21db518baeeff2ee06bdf", Pod:"csi-node-driver-d9ldw", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.71.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"cali797616e92eb", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.727 [INFO][5233] k8s.go 608: Cleaning up netns ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.728 [INFO][5233] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" iface="eth0" netns="" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.728 [INFO][5233] k8s.go 615: Releasing IP address(es) ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.728 [INFO][5233] utils.go 188: Calico CNI releasing IP address ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.746 [INFO][5239] ipam_plugin.go 411: Releasing address using handleID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.746 [INFO][5239] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.746 [INFO][5239] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.751 [WARNING][5239] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.751 [INFO][5239] ipam_plugin.go 439: Releasing address using workloadID ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" HandleID="k8s-pod-network.be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-csi--node--driver--d9ldw-eth0" Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.752 [INFO][5239] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.754051 containerd[1689]: 2024-08-05 22:13:58.753 [INFO][5233] k8s.go 621: Teardown processing complete. ContainerID="be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3" Aug 5 22:13:58.754694 containerd[1689]: time="2024-08-05T22:13:58.754096396Z" level=info msg="TearDown network for sandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" successfully" Aug 5 22:13:58.783169 containerd[1689]: time="2024-08-05T22:13:58.782965821Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:13:58.783169 containerd[1689]: time="2024-08-05T22:13:58.783148124Z" level=info msg="RemovePodSandbox \"be5a0bf0ba3559b2a160168c3a8705b3050ad3249890fe13ca7612c09363a5f3\" returns successfully" Aug 5 22:13:58.783794 containerd[1689]: time="2024-08-05T22:13:58.783737635Z" level=info msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.817 [WARNING][5257] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ce9f3ff0-2240-4466-ac86-fd158dab8531", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e", Pod:"coredns-5dd5756b68-q5v27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f08a67a8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.818 [INFO][5257] k8s.go 608: Cleaning up netns ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.818 [INFO][5257] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" iface="eth0" netns="" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.818 [INFO][5257] k8s.go 615: Releasing IP address(es) ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.818 [INFO][5257] utils.go 188: Calico CNI releasing IP address ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.836 [INFO][5263] ipam_plugin.go 411: Releasing address using handleID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.836 [INFO][5263] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.837 [INFO][5263] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.841 [WARNING][5263] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.841 [INFO][5263] ipam_plugin.go 439: Releasing address using workloadID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.842 [INFO][5263] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.844488 containerd[1689]: 2024-08-05 22:13:58.843 [INFO][5257] k8s.go 621: Teardown processing complete. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.845150 containerd[1689]: time="2024-08-05T22:13:58.844538039Z" level=info msg="TearDown network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" successfully" Aug 5 22:13:58.845150 containerd[1689]: time="2024-08-05T22:13:58.844571339Z" level=info msg="StopPodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" returns successfully" Aug 5 22:13:58.845319 containerd[1689]: time="2024-08-05T22:13:58.845277352Z" level=info msg="RemovePodSandbox for \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" Aug 5 22:13:58.845319 containerd[1689]: time="2024-08-05T22:13:58.845313553Z" level=info msg="Forcibly stopping sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\"" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.877 [WARNING][5281] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"ce9f3ff0-2240-4466-ac86-fd158dab8531", ResourceVersion:"747", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 13, 6, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"2037637b59c206ce3e513a55cf7e19abf31dbcc021449598d187b388a15f7d4e", Pod:"coredns-5dd5756b68-q5v27", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.71.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2f08a67a8f1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.877 [INFO][5281] k8s.go 608: Cleaning up netns ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.877 [INFO][5281] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" iface="eth0" netns="" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.877 [INFO][5281] k8s.go 615: Releasing IP address(es) ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.877 [INFO][5281] utils.go 188: Calico CNI releasing IP address ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.895 [INFO][5287] ipam_plugin.go 411: Releasing address using handleID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.895 [INFO][5287] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.895 [INFO][5287] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.900 [WARNING][5287] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.900 [INFO][5287] ipam_plugin.go 439: Releasing address using workloadID ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" HandleID="k8s-pod-network.69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-coredns--5dd5756b68--q5v27-eth0" Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.902 [INFO][5287] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:13:58.905855 containerd[1689]: 2024-08-05 22:13:58.904 [INFO][5281] k8s.go 621: Teardown processing complete. ContainerID="69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d" Aug 5 22:13:58.905855 containerd[1689]: time="2024-08-05T22:13:58.905769351Z" level=info msg="TearDown network for sandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" successfully" Aug 5 22:13:58.916198 containerd[1689]: time="2024-08-05T22:13:58.916155139Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:13:58.916315 containerd[1689]: time="2024-08-05T22:13:58.916223840Z" level=info msg="RemovePodSandbox \"69fe3e26b44cadedfe739ababd6a960569ea35cfa00e4683a11da354c3daa42d\" returns successfully" Aug 5 22:14:02.554557 systemd[1]: run-containerd-runc-k8s.io-0acc0d8598d57a9d3ca13ef743cbe65eee182988fe91edb55bb486f5a85bd0fe-runc.5rtxMh.mount: Deactivated successfully. Aug 5 22:14:12.375094 systemd[1]: run-containerd-runc-k8s.io-92ffc66b4b5de9f52200640d85ab0e18da4e0d2c1abf33bc0e846736983de0cd-runc.94sPfX.mount: Deactivated successfully. Aug 5 22:14:12.433923 kubelet[3200]: I0805 22:14:12.433885 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-d9ldw" podStartSLOduration=55.85149625 podCreationTimestamp="2024-08-05 22:13:12 +0000 UTC" firstStartedPulling="2024-08-05 22:13:51.454619176 +0000 UTC m=+53.193054618" lastFinishedPulling="2024-08-05 22:13:56.036936304 +0000 UTC m=+57.775371646" observedRunningTime="2024-08-05 22:13:56.587371409 +0000 UTC m=+58.325806851" watchObservedRunningTime="2024-08-05 22:14:12.433813278 +0000 UTC m=+74.172248620" Aug 5 22:14:17.356154 kubelet[3200]: I0805 22:14:17.355773 3200 topology_manager.go:215] "Topology Admit Handler" podUID="1ccf79ac-8dd2-4892-a2c5-46b90649fe8d" podNamespace="calico-apiserver" podName="calico-apiserver-f5fc79cf4-8dlld" Aug 5 22:14:17.367702 systemd[1]: Created slice kubepods-besteffort-pod1ccf79ac_8dd2_4892_a2c5_46b90649fe8d.slice - libcontainer container kubepods-besteffort-pod1ccf79ac_8dd2_4892_a2c5_46b90649fe8d.slice. Aug 5 22:14:17.385735 kubelet[3200]: I0805 22:14:17.385704 3200 topology_manager.go:215] "Topology Admit Handler" podUID="43afebc7-f6be-425e-afe9-09804abc6819" podNamespace="calico-apiserver" podName="calico-apiserver-f5fc79cf4-885qp" Aug 5 22:14:17.397387 systemd[1]: Created slice kubepods-besteffort-pod43afebc7_f6be_425e_afe9_09804abc6819.slice - libcontainer container kubepods-besteffort-pod43afebc7_f6be_425e_afe9_09804abc6819.slice. 
Aug 5 22:14:17.473618 kubelet[3200]: I0805 22:14:17.473535 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sx9g\" (UniqueName: \"kubernetes.io/projected/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-kube-api-access-5sx9g\") pod \"calico-apiserver-f5fc79cf4-8dlld\" (UID: \"1ccf79ac-8dd2-4892-a2c5-46b90649fe8d\") " pod="calico-apiserver/calico-apiserver-f5fc79cf4-8dlld" Aug 5 22:14:17.473618 kubelet[3200]: I0805 22:14:17.473593 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccnv5\" (UniqueName: \"kubernetes.io/projected/43afebc7-f6be-425e-afe9-09804abc6819-kube-api-access-ccnv5\") pod \"calico-apiserver-f5fc79cf4-885qp\" (UID: \"43afebc7-f6be-425e-afe9-09804abc6819\") " pod="calico-apiserver/calico-apiserver-f5fc79cf4-885qp" Aug 5 22:14:17.473863 kubelet[3200]: I0805 22:14:17.473678 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/43afebc7-f6be-425e-afe9-09804abc6819-calico-apiserver-certs\") pod \"calico-apiserver-f5fc79cf4-885qp\" (UID: \"43afebc7-f6be-425e-afe9-09804abc6819\") " pod="calico-apiserver/calico-apiserver-f5fc79cf4-885qp" Aug 5 22:14:17.473863 kubelet[3200]: I0805 22:14:17.473731 3200 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-calico-apiserver-certs\") pod \"calico-apiserver-f5fc79cf4-8dlld\" (UID: \"1ccf79ac-8dd2-4892-a2c5-46b90649fe8d\") " pod="calico-apiserver/calico-apiserver-f5fc79cf4-8dlld" Aug 5 22:14:17.574462 kubelet[3200]: E0805 22:14:17.574410 3200 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:17.574647 kubelet[3200]: E0805 22:14:17.574537 3200 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-calico-apiserver-certs podName:1ccf79ac-8dd2-4892-a2c5-46b90649fe8d nodeName:}" failed. No retries permitted until 2024-08-05 22:14:18.074496358 +0000 UTC m=+79.812931700 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-calico-apiserver-certs") pod "calico-apiserver-f5fc79cf4-8dlld" (UID: "1ccf79ac-8dd2-4892-a2c5-46b90649fe8d") : secret "calico-apiserver-certs" not found Aug 5 22:14:17.575291 kubelet[3200]: E0805 22:14:17.575257 3200 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:17.575467 kubelet[3200]: E0805 22:14:17.575325 3200 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43afebc7-f6be-425e-afe9-09804abc6819-calico-apiserver-certs podName:43afebc7-f6be-425e-afe9-09804abc6819 nodeName:}" failed. No retries permitted until 2024-08-05 22:14:18.075307172 +0000 UTC m=+79.813742614 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/43afebc7-f6be-425e-afe9-09804abc6819-calico-apiserver-certs") pod "calico-apiserver-f5fc79cf4-885qp" (UID: "43afebc7-f6be-425e-afe9-09804abc6819") : secret "calico-apiserver-certs" not found Aug 5 22:14:18.077135 kubelet[3200]: E0805 22:14:18.077088 3200 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:18.077498 kubelet[3200]: E0805 22:14:18.077181 3200 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-calico-apiserver-certs podName:1ccf79ac-8dd2-4892-a2c5-46b90649fe8d nodeName:}" failed. No retries permitted until 2024-08-05 22:14:19.077160192 +0000 UTC m=+80.815595534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/1ccf79ac-8dd2-4892-a2c5-46b90649fe8d-calico-apiserver-certs") pod "calico-apiserver-f5fc79cf4-8dlld" (UID: "1ccf79ac-8dd2-4892-a2c5-46b90649fe8d") : secret "calico-apiserver-certs" not found Aug 5 22:14:18.077873 kubelet[3200]: E0805 22:14:18.077088 3200 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:14:18.077873 kubelet[3200]: E0805 22:14:18.077661 3200 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/43afebc7-f6be-425e-afe9-09804abc6819-calico-apiserver-certs podName:43afebc7-f6be-425e-afe9-09804abc6819 nodeName:}" failed. No retries permitted until 2024-08-05 22:14:19.0776356 +0000 UTC m=+80.816070942 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/43afebc7-f6be-425e-afe9-09804abc6819-calico-apiserver-certs") pod "calico-apiserver-f5fc79cf4-885qp" (UID: "43afebc7-f6be-425e-afe9-09804abc6819") : secret "calico-apiserver-certs" not found Aug 5 22:14:20.673974 containerd[1689]: time="2024-08-05T22:14:20.673920361Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f5fc79cf4-8dlld,Uid:1ccf79ac-8dd2-4892-a2c5-46b90649fe8d,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:14:20.704845 containerd[1689]: time="2024-08-05T22:14:20.704803516Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f5fc79cf4-885qp,Uid:43afebc7-f6be-425e-afe9-09804abc6819,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:14:20.899127 systemd-networkd[1574]: cali12628613910: Link UP Aug 5 22:14:20.902009 systemd-networkd[1574]: cali12628613910: Gained carrier Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.787 [INFO][5365] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0 calico-apiserver-f5fc79cf4- calico-apiserver 1ccf79ac-8dd2-4892-a2c5-46b90649fe8d 854 0 2024-08-05 22:14:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f5fc79cf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc calico-apiserver-f5fc79cf4-8dlld eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali12628613910 [] []}} ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" 
Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.788 [INFO][5365] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.831 [INFO][5386] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" HandleID="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.841 [INFO][5386] ipam_plugin.go 264: Auto assigning IP ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" HandleID="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002eddc0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"calico-apiserver-f5fc79cf4-8dlld", "timestamp":"2024-08-05 22:14:20.831615695 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.841 [INFO][5386] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.841 [INFO][5386] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.841 [INFO][5386] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.843 [INFO][5386] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.849 [INFO][5386] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.855 [INFO][5386] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.858 [INFO][5386] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.860 [INFO][5386] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.861 [INFO][5386] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.867 [INFO][5386] ipam.go 1685: Creating new handle: k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.875 [INFO][5386] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.880 [INFO][5386] ipam.go 1216: Successfully claimed IPs: [192.168.71.69/26] block=192.168.71.64/26 handle="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.880 [INFO][5386] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.69/26] handle="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.881 [INFO][5386] ipam_plugin.go 373: Released host-wide IPAM lock. 
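[Editor's note] The IPAM sequence above (acquire the host-wide lock, confirm this host's affinity for block 192.168.71.64/26, claim an address, release the lock) ends with 192.168.71.69 being handed out. Below is a toy Go sketch of the "lowest free address in the affine block" step only; Calico's real allocator also manages handles and does compare-and-swap writes to the datastore. The set of already-used addresses is partly an assumption: only .67 and .68 are visible in this part of the log, but .64 through .66 must have been unavailable for .69 to be the next pick.

package main

import (
    "fmt"
    "net/netip"
)

// nextFreeInBlock walks the /26 block this host holds an affinity for and
// returns the lowest address not yet assigned. This is only an illustration
// of the allocation step, not Calico's implementation.
func nextFreeInBlock(block netip.Prefix, used map[netip.Addr]bool) (netip.Addr, bool) {
    for a := block.Addr(); block.Contains(a); a = a.Next() {
        if !used[a] {
            return a, true
        }
    }
    return netip.Addr{}, false
}

func main() {
    block := netip.MustParsePrefix("192.168.71.64/26")
    used := map[netip.Addr]bool{}
    // .67 (csi-node-driver) and .68 (coredns) appear earlier in this log;
    // .64 to .66 are assumed taken by earlier allocations.
    for _, s := range []string{"192.168.71.64", "192.168.71.65", "192.168.71.66", "192.168.71.67", "192.168.71.68"} {
        used[netip.MustParseAddr(s)] = true
    }
    if a, ok := nextFreeInBlock(block, used); ok {
        fmt.Println(a) // 192.168.71.69, matching the address claimed above
    }
}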
Aug 5 22:14:20.923748 containerd[1689]: 2024-08-05 22:14:20.881 [INFO][5386] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.69/26] IPv6=[] ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" HandleID="k8s-pod-network.7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.883 [INFO][5365] k8s.go 386: Populated endpoint ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0", GenerateName:"calico-apiserver-f5fc79cf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ccf79ac-8dd2-4892-a2c5-46b90649fe8d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f5fc79cf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"calico-apiserver-f5fc79cf4-8dlld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12628613910", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.884 [INFO][5365] k8s.go 387: Calico CNI using IPs: [192.168.71.69/32] ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.884 [INFO][5365] dataplane_linux.go 68: Setting the host side veth name to cali12628613910 ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.902 [INFO][5365] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.905 [INFO][5365] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0", GenerateName:"calico-apiserver-f5fc79cf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"1ccf79ac-8dd2-4892-a2c5-46b90649fe8d", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f5fc79cf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c", Pod:"calico-apiserver-f5fc79cf4-8dlld", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali12628613910", MAC:"22:ea:42:bb:df:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:20.926774 containerd[1689]: 2024-08-05 22:14:20.918 [INFO][5365] k8s.go 500: Wrote updated endpoint to datastore ContainerID="7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-8dlld" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--8dlld-eth0" Aug 5 22:14:20.944089 systemd-networkd[1574]: cali0dceb4a8948: Link UP Aug 5 22:14:20.948916 systemd-networkd[1574]: cali0dceb4a8948: Gained carrier Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.818 [INFO][5375] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0 calico-apiserver-f5fc79cf4- calico-apiserver 43afebc7-f6be-425e-afe9-09804abc6819 860 0 2024-08-05 22:14:17 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f5fc79cf4 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-3975.2.0-a-9e76a2f9cc calico-apiserver-f5fc79cf4-885qp eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0dceb4a8948 [] []}} ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.818 [INFO][5375] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" 
Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.863 [INFO][5392] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" HandleID="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.878 [INFO][5392] ipam_plugin.go 264: Auto assigning IP ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" HandleID="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318300), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-3975.2.0-a-9e76a2f9cc", "pod":"calico-apiserver-f5fc79cf4-885qp", "timestamp":"2024-08-05 22:14:20.863550769 +0000 UTC"}, Hostname:"ci-3975.2.0-a-9e76a2f9cc", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.878 [INFO][5392] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.881 [INFO][5392] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.881 [INFO][5392] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-3975.2.0-a-9e76a2f9cc' Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.883 [INFO][5392] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.892 [INFO][5392] ipam.go 372: Looking up existing affinities for host host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.906 [INFO][5392] ipam.go 489: Trying affinity for 192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.911 [INFO][5392] ipam.go 155: Attempting to load block cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.915 [INFO][5392] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.71.64/26 host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.915 [INFO][5392] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.71.64/26 handle="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.917 [INFO][5392] ipam.go 1685: Creating new handle: k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.923 [INFO][5392] ipam.go 1203: Writing block in order to claim IPs block=192.168.71.64/26 handle="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 
containerd[1689]: 2024-08-05 22:14:20.934 [INFO][5392] ipam.go 1216: Successfully claimed IPs: [192.168.71.70/26] block=192.168.71.64/26 handle="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.934 [INFO][5392] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.71.70/26] handle="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" host="ci-3975.2.0-a-9e76a2f9cc" Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.934 [INFO][5392] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:14:20.966338 containerd[1689]: 2024-08-05 22:14:20.934 [INFO][5392] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.71.70/26] IPv6=[] ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" HandleID="k8s-pod-network.316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Workload="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.939 [INFO][5375] k8s.go 386: Populated endpoint ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0", GenerateName:"calico-apiserver-f5fc79cf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"43afebc7-f6be-425e-afe9-09804abc6819", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f5fc79cf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"", Pod:"calico-apiserver-f5fc79cf4-885qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dceb4a8948", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.939 [INFO][5375] k8s.go 387: Calico CNI using IPs: [192.168.71.70/32] ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.939 [INFO][5375] dataplane_linux.go 68: Setting the host side veth name to cali0dceb4a8948 ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" 
Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.949 [INFO][5375] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.950 [INFO][5375] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0", GenerateName:"calico-apiserver-f5fc79cf4-", Namespace:"calico-apiserver", SelfLink:"", UID:"43afebc7-f6be-425e-afe9-09804abc6819", ResourceVersion:"860", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 14, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f5fc79cf4", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-3975.2.0-a-9e76a2f9cc", ContainerID:"316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa", Pod:"calico-apiserver-f5fc79cf4-885qp", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.71.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0dceb4a8948", MAC:"02:08:c6:4b:d5:9c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:14:20.967460 containerd[1689]: 2024-08-05 22:14:20.962 [INFO][5375] k8s.go 500: Wrote updated endpoint to datastore ContainerID="316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa" Namespace="calico-apiserver" Pod="calico-apiserver-f5fc79cf4-885qp" WorkloadEndpoint="ci--3975.2.0--a--9e76a2f9cc-k8s-calico--apiserver--f5fc79cf4--885qp-eth0" Aug 5 22:14:20.994910 containerd[1689]: time="2024-08-05T22:14:20.993857311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:20.995273 containerd[1689]: time="2024-08-05T22:14:20.995224136Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:20.996125 containerd[1689]: time="2024-08-05T22:14:20.995964949Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:20.996125 containerd[1689]: time="2024-08-05T22:14:20.995990250Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:21.037986 systemd[1]: Started cri-containerd-7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c.scope - libcontainer container 7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c. Aug 5 22:14:21.040107 containerd[1689]: time="2024-08-05T22:14:21.039623034Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:14:21.040107 containerd[1689]: time="2024-08-05T22:14:21.039776737Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:21.040107 containerd[1689]: time="2024-08-05T22:14:21.040055242Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:14:21.040745 containerd[1689]: time="2024-08-05T22:14:21.040148343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:14:21.074510 systemd[1]: Started cri-containerd-316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa.scope - libcontainer container 316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa. Aug 5 22:14:21.185997 containerd[1689]: time="2024-08-05T22:14:21.185131559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f5fc79cf4-885qp,Uid:43afebc7-f6be-425e-afe9-09804abc6819,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa\"" Aug 5 22:14:21.185997 containerd[1689]: time="2024-08-05T22:14:21.185515766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f5fc79cf4-8dlld,Uid:1ccf79ac-8dd2-4892-a2c5-46b90649fe8d,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c\"" Aug 5 22:14:21.190989 containerd[1689]: time="2024-08-05T22:14:21.189224433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:14:22.592635 systemd-networkd[1574]: cali12628613910: Gained IPv6LL Aug 5 22:14:22.593142 systemd-networkd[1574]: cali0dceb4a8948: Gained IPv6LL Aug 5 22:14:26.582533 containerd[1689]: time="2024-08-05T22:14:26.582480740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.585002 containerd[1689]: time="2024-08-05T22:14:26.584826583Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 5 22:14:26.590464 containerd[1689]: time="2024-08-05T22:14:26.590413184Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.596325 containerd[1689]: time="2024-08-05T22:14:26.596272490Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:26.597275 containerd[1689]: time="2024-08-05T22:14:26.597056204Z" level=info 
msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 5.40779257s" Aug 5 22:14:26.597275 containerd[1689]: time="2024-08-05T22:14:26.597095304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:14:26.598599 containerd[1689]: time="2024-08-05T22:14:26.598410028Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:14:26.599765 containerd[1689]: time="2024-08-05T22:14:26.599583749Z" level=info msg="CreateContainer within sandbox \"7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:14:26.677723 containerd[1689]: time="2024-08-05T22:14:26.677670960Z" level=info msg="CreateContainer within sandbox \"7dc40f144e069e21bffec693509552c2722ce5e1bf6c16e7b8e21e1f74e7cc8c\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"e285bd3eb2cf38460cbbe42fa288f5439b7bcb2b25c331ce0b6df150e2d01194\"" Aug 5 22:14:26.678423 containerd[1689]: time="2024-08-05T22:14:26.678382773Z" level=info msg="StartContainer for \"e285bd3eb2cf38460cbbe42fa288f5439b7bcb2b25c331ce0b6df150e2d01194\"" Aug 5 22:14:26.716947 systemd[1]: Started cri-containerd-e285bd3eb2cf38460cbbe42fa288f5439b7bcb2b25c331ce0b6df150e2d01194.scope - libcontainer container e285bd3eb2cf38460cbbe42fa288f5439b7bcb2b25c331ce0b6df150e2d01194. Aug 5 22:14:26.767944 containerd[1689]: time="2024-08-05T22:14:26.767718286Z" level=info msg="StartContainer for \"e285bd3eb2cf38460cbbe42fa288f5439b7bcb2b25c331ce0b6df150e2d01194\" returns successfully" Aug 5 22:14:27.075458 containerd[1689]: time="2024-08-05T22:14:27.074453426Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:14:27.077181 containerd[1689]: time="2024-08-05T22:14:27.077124074Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Aug 5 22:14:27.080690 containerd[1689]: time="2024-08-05T22:14:27.080649038Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 482.201709ms" Aug 5 22:14:27.080817 containerd[1689]: time="2024-08-05T22:14:27.080693939Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:14:27.085934 containerd[1689]: time="2024-08-05T22:14:27.085899233Z" level=info msg="CreateContainer within sandbox \"316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:14:27.131124 containerd[1689]: time="2024-08-05T22:14:27.131078149Z" level=info msg="CreateContainer within sandbox \"316bcd3c99fab6bcdd4deaefd06cff2d7f65b55f09e4f8cd55e0d0cbf08748aa\" for 
&ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"154728d6a92e7a3836cae846d360891a0909f77f8f1a772b964b5cbef120b489\"" Aug 5 22:14:27.131926 containerd[1689]: time="2024-08-05T22:14:27.131637959Z" level=info msg="StartContainer for \"154728d6a92e7a3836cae846d360891a0909f77f8f1a772b964b5cbef120b489\"" Aug 5 22:14:27.160162 systemd[1]: Started cri-containerd-154728d6a92e7a3836cae846d360891a0909f77f8f1a772b964b5cbef120b489.scope - libcontainer container 154728d6a92e7a3836cae846d360891a0909f77f8f1a772b964b5cbef120b489. Aug 5 22:14:27.318312 containerd[1689]: time="2024-08-05T22:14:27.318259629Z" level=info msg="StartContainer for \"154728d6a92e7a3836cae846d360891a0909f77f8f1a772b964b5cbef120b489\" returns successfully" Aug 5 22:14:27.690825 kubelet[3200]: I0805 22:14:27.690440 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f5fc79cf4-8dlld" podStartSLOduration=5.28146236 podCreationTimestamp="2024-08-05 22:14:17 +0000 UTC" firstStartedPulling="2024-08-05 22:14:21.188835226 +0000 UTC m=+82.927270568" lastFinishedPulling="2024-08-05 22:14:26.597731516 +0000 UTC m=+88.336166958" observedRunningTime="2024-08-05 22:14:27.686895487 +0000 UTC m=+89.425330929" watchObservedRunningTime="2024-08-05 22:14:27.69035875 +0000 UTC m=+89.428794092" Aug 5 22:14:27.718705 kubelet[3200]: I0805 22:14:27.717940 3200 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f5fc79cf4-885qp" podStartSLOduration=4.827119753 podCreationTimestamp="2024-08-05 22:14:17 +0000 UTC" firstStartedPulling="2024-08-05 22:14:21.191353071 +0000 UTC m=+82.929788513" lastFinishedPulling="2024-08-05 22:14:27.082124365 +0000 UTC m=+88.820559807" observedRunningTime="2024-08-05 22:14:27.717331537 +0000 UTC m=+89.455766879" watchObservedRunningTime="2024-08-05 22:14:27.717891047 +0000 UTC m=+89.456326389" Aug 5 22:14:58.535096 systemd[1]: Started sshd@7-10.200.8.39:22-10.200.16.10:51526.service - OpenSSH per-connection server daemon (10.200.16.10:51526). Aug 5 22:14:59.180855 sshd[5699]: Accepted publickey for core from 10.200.16.10 port 51526 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:14:59.182438 sshd[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:14:59.187257 systemd-logind[1663]: New session 10 of user core. Aug 5 22:14:59.192947 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:14:59.706875 sshd[5699]: pam_unix(sshd:session): session closed for user core Aug 5 22:14:59.710500 systemd[1]: sshd@7-10.200.8.39:22-10.200.16.10:51526.service: Deactivated successfully. Aug 5 22:14:59.713546 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:14:59.715375 systemd-logind[1663]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:14:59.716559 systemd-logind[1663]: Removed session 10. Aug 5 22:15:04.829077 systemd[1]: Started sshd@8-10.200.8.39:22-10.200.16.10:54534.service - OpenSSH per-connection server daemon (10.200.16.10:54534). Aug 5 22:15:05.499462 sshd[5739]: Accepted publickey for core from 10.200.16.10 port 54534 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs Aug 5 22:15:05.501221 sshd[5739]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:15:05.506132 systemd-logind[1663]: New session 11 of user core. Aug 5 22:15:05.513172 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 5 22:15:06.006157 sshd[5739]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:06.010656 systemd[1]: sshd@8-10.200.8.39:22-10.200.16.10:54534.service: Deactivated successfully.
Aug 5 22:15:06.012994 systemd[1]: session-11.scope: Deactivated successfully.
Aug 5 22:15:06.013776 systemd-logind[1663]: Session 11 logged out. Waiting for processes to exit.
Aug 5 22:15:06.014922 systemd-logind[1663]: Removed session 11.
Aug 5 22:15:11.123097 systemd[1]: Started sshd@9-10.200.8.39:22-10.200.16.10:34586.service - OpenSSH per-connection server daemon (10.200.16.10:34586).
Aug 5 22:15:11.763615 sshd[5766]: Accepted publickey for core from 10.200.16.10 port 34586 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:11.765457 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:11.769837 systemd-logind[1663]: New session 12 of user core.
Aug 5 22:15:11.776943 systemd[1]: Started session-12.scope - Session 12 of User core.
Aug 5 22:15:12.276263 sshd[5766]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:12.279362 systemd[1]: sshd@9-10.200.8.39:22-10.200.16.10:34586.service: Deactivated successfully.
Aug 5 22:15:12.281918 systemd[1]: session-12.scope: Deactivated successfully.
Aug 5 22:15:12.283845 systemd-logind[1663]: Session 12 logged out. Waiting for processes to exit.
Aug 5 22:15:12.285016 systemd-logind[1663]: Removed session 12.
Aug 5 22:15:17.397098 systemd[1]: Started sshd@10-10.200.8.39:22-10.200.16.10:34588.service - OpenSSH per-connection server daemon (10.200.16.10:34588).
Aug 5 22:15:18.037813 sshd[5802]: Accepted publickey for core from 10.200.16.10 port 34588 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:18.039489 sshd[5802]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:18.044185 systemd-logind[1663]: New session 13 of user core.
Aug 5 22:15:18.048947 systemd[1]: Started session-13.scope - Session 13 of User core.
Aug 5 22:15:18.554935 sshd[5802]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:18.558521 systemd[1]: sshd@10-10.200.8.39:22-10.200.16.10:34588.service: Deactivated successfully.
Aug 5 22:15:18.561218 systemd[1]: session-13.scope: Deactivated successfully.
Aug 5 22:15:18.562907 systemd-logind[1663]: Session 13 logged out. Waiting for processes to exit.
Aug 5 22:15:18.564430 systemd-logind[1663]: Removed session 13.
Aug 5 22:15:18.673086 systemd[1]: Started sshd@11-10.200.8.39:22-10.200.16.10:34596.service - OpenSSH per-connection server daemon (10.200.16.10:34596).
Aug 5 22:15:19.311940 sshd[5816]: Accepted publickey for core from 10.200.16.10 port 34596 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:19.313437 sshd[5816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:19.318264 systemd-logind[1663]: New session 14 of user core.
Aug 5 22:15:19.327185 systemd[1]: Started session-14.scope - Session 14 of User core.
Aug 5 22:15:20.638468 sshd[5816]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:20.642190 systemd[1]: sshd@11-10.200.8.39:22-10.200.16.10:34596.service: Deactivated successfully.
Aug 5 22:15:20.644619 systemd[1]: session-14.scope: Deactivated successfully.
Aug 5 22:15:20.646473 systemd-logind[1663]: Session 14 logged out. Waiting for processes to exit.
Aug 5 22:15:20.647755 systemd-logind[1663]: Removed session 14.
Aug 5 22:15:20.768981 systemd[1]: Started sshd@12-10.200.8.39:22-10.200.16.10:39866.service - OpenSSH per-connection server daemon (10.200.16.10:39866).
Aug 5 22:15:21.405219 sshd[5832]: Accepted publickey for core from 10.200.16.10 port 39866 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:21.407026 sshd[5832]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:21.411949 systemd-logind[1663]: New session 15 of user core.
Aug 5 22:15:21.416077 systemd[1]: Started session-15.scope - Session 15 of User core.
Aug 5 22:15:21.916067 sshd[5832]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:21.919892 systemd[1]: sshd@12-10.200.8.39:22-10.200.16.10:39866.service: Deactivated successfully.
Aug 5 22:15:21.922845 systemd[1]: session-15.scope: Deactivated successfully.
Aug 5 22:15:21.925221 systemd-logind[1663]: Session 15 logged out. Waiting for processes to exit.
Aug 5 22:15:21.927477 systemd-logind[1663]: Removed session 15.
Aug 5 22:15:27.038098 systemd[1]: Started sshd@13-10.200.8.39:22-10.200.16.10:39872.service - OpenSSH per-connection server daemon (10.200.16.10:39872).
Aug 5 22:15:27.687208 sshd[5862]: Accepted publickey for core from 10.200.16.10 port 39872 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:27.688882 sshd[5862]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:27.693747 systemd-logind[1663]: New session 16 of user core.
Aug 5 22:15:27.700950 systemd[1]: Started session-16.scope - Session 16 of User core.
Aug 5 22:15:28.206239 sshd[5862]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:28.209422 systemd[1]: sshd@13-10.200.8.39:22-10.200.16.10:39872.service: Deactivated successfully.
Aug 5 22:15:28.211973 systemd[1]: session-16.scope: Deactivated successfully.
Aug 5 22:15:28.213746 systemd-logind[1663]: Session 16 logged out. Waiting for processes to exit.
Aug 5 22:15:28.214930 systemd-logind[1663]: Removed session 16.
Aug 5 22:15:28.327085 systemd[1]: Started sshd@14-10.200.8.39:22-10.200.16.10:39886.service - OpenSSH per-connection server daemon (10.200.16.10:39886).
Aug 5 22:15:28.973157 sshd[5874]: Accepted publickey for core from 10.200.16.10 port 39886 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:28.974673 sshd[5874]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:28.978845 systemd-logind[1663]: New session 17 of user core.
Aug 5 22:15:28.984292 systemd[1]: Started session-17.scope - Session 17 of User core.
Aug 5 22:15:29.567294 sshd[5874]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:29.571072 systemd[1]: sshd@14-10.200.8.39:22-10.200.16.10:39886.service: Deactivated successfully.
Aug 5 22:15:29.573472 systemd[1]: session-17.scope: Deactivated successfully.
Aug 5 22:15:29.575258 systemd-logind[1663]: Session 17 logged out. Waiting for processes to exit.
Aug 5 22:15:29.576533 systemd-logind[1663]: Removed session 17.
Aug 5 22:15:29.685313 systemd[1]: Started sshd@15-10.200.8.39:22-10.200.16.10:57298.service - OpenSSH per-connection server daemon (10.200.16.10:57298).
Aug 5 22:15:30.322315 sshd[5885]: Accepted publickey for core from 10.200.16.10 port 57298 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:30.323931 sshd[5885]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:30.328945 systemd-logind[1663]: New session 18 of user core.
Aug 5 22:15:30.335153 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 5 22:15:31.416719 sshd[5885]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:31.420514 systemd[1]: sshd@15-10.200.8.39:22-10.200.16.10:57298.service: Deactivated successfully.
Aug 5 22:15:31.423070 systemd[1]: session-18.scope: Deactivated successfully.
Aug 5 22:15:31.424681 systemd-logind[1663]: Session 18 logged out. Waiting for processes to exit.
Aug 5 22:15:31.426201 systemd-logind[1663]: Removed session 18.
Aug 5 22:15:31.537768 systemd[1]: Started sshd@16-10.200.8.39:22-10.200.16.10:57300.service - OpenSSH per-connection server daemon (10.200.16.10:57300).
Aug 5 22:15:32.185933 sshd[5908]: Accepted publickey for core from 10.200.16.10 port 57300 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:32.188520 sshd[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:32.195854 systemd-logind[1663]: New session 19 of user core.
Aug 5 22:15:32.201012 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 5 22:15:32.945965 sshd[5908]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:32.949602 systemd[1]: sshd@16-10.200.8.39:22-10.200.16.10:57300.service: Deactivated successfully.
Aug 5 22:15:32.952137 systemd[1]: session-19.scope: Deactivated successfully.
Aug 5 22:15:32.954182 systemd-logind[1663]: Session 19 logged out. Waiting for processes to exit.
Aug 5 22:15:32.955274 systemd-logind[1663]: Removed session 19.
Aug 5 22:15:33.082082 systemd[1]: Started sshd@17-10.200.8.39:22-10.200.16.10:57302.service - OpenSSH per-connection server daemon (10.200.16.10:57302).
Aug 5 22:15:33.730751 sshd[5938]: Accepted publickey for core from 10.200.16.10 port 57302 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:33.732065 sshd[5938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:33.737364 systemd-logind[1663]: New session 20 of user core.
Aug 5 22:15:33.742070 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:15:34.247988 sshd[5938]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:34.251664 systemd[1]: sshd@17-10.200.8.39:22-10.200.16.10:57302.service: Deactivated successfully.
Aug 5 22:15:34.254564 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:15:34.256511 systemd-logind[1663]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:15:34.257846 systemd-logind[1663]: Removed session 20.
Aug 5 22:15:39.376137 systemd[1]: Started sshd@18-10.200.8.39:22-10.200.16.10:37934.service - OpenSSH per-connection server daemon (10.200.16.10:37934).
Aug 5 22:15:40.018570 sshd[5957]: Accepted publickey for core from 10.200.16.10 port 37934 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:40.020381 sshd[5957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:40.026216 systemd-logind[1663]: New session 21 of user core.
Aug 5 22:15:40.031978 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:15:40.526850 sshd[5957]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:40.530652 systemd[1]: sshd@18-10.200.8.39:22-10.200.16.10:37934.service: Deactivated successfully.
Aug 5 22:15:40.533564 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:15:40.535530 systemd-logind[1663]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:15:40.537252 systemd-logind[1663]: Removed session 21.
Aug 5 22:15:45.649094 systemd[1]: Started sshd@19-10.200.8.39:22-10.200.16.10:37942.service - OpenSSH per-connection server daemon (10.200.16.10:37942).
Aug 5 22:15:46.308847 sshd[6000]: Accepted publickey for core from 10.200.16.10 port 37942 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:46.310730 sshd[6000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:46.316351 systemd-logind[1663]: New session 22 of user core.
Aug 5 22:15:46.321240 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:15:46.820041 sshd[6000]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:46.824886 systemd[1]: sshd@19-10.200.8.39:22-10.200.16.10:37942.service: Deactivated successfully.
Aug 5 22:15:46.827146 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:15:46.828070 systemd-logind[1663]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:15:46.829129 systemd-logind[1663]: Removed session 22.
Aug 5 22:15:51.942078 systemd[1]: Started sshd@20-10.200.8.39:22-10.200.16.10:37006.service - OpenSSH per-connection server daemon (10.200.16.10:37006).
Aug 5 22:15:52.589942 sshd[6032]: Accepted publickey for core from 10.200.16.10 port 37006 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:52.591649 sshd[6032]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:52.596775 systemd-logind[1663]: New session 23 of user core.
Aug 5 22:15:52.604164 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:15:53.095915 sshd[6032]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:53.100779 systemd[1]: sshd@20-10.200.8.39:22-10.200.16.10:37006.service: Deactivated successfully.
Aug 5 22:15:53.103288 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:15:53.104196 systemd-logind[1663]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:15:53.105325 systemd-logind[1663]: Removed session 23.
Aug 5 22:15:58.220645 systemd[1]: Started sshd@21-10.200.8.39:22-10.200.16.10:37020.service - OpenSSH per-connection server daemon (10.200.16.10:37020).
Aug 5 22:15:58.969985 sshd[6050]: Accepted publickey for core from 10.200.16.10 port 37020 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:15:58.971660 sshd[6050]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:15:58.975827 systemd-logind[1663]: New session 24 of user core.
Aug 5 22:15:58.980933 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:15:59.477682 sshd[6050]: pam_unix(sshd:session): session closed for user core
Aug 5 22:15:59.481674 systemd[1]: sshd@21-10.200.8.39:22-10.200.16.10:37020.service: Deactivated successfully.
Aug 5 22:15:59.484090 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:15:59.484979 systemd-logind[1663]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:15:59.486015 systemd-logind[1663]: Removed session 24.
Aug 5 22:16:04.597124 systemd[1]: Started sshd@22-10.200.8.39:22-10.200.16.10:45338.service - OpenSSH per-connection server daemon (10.200.16.10:45338).
Aug 5 22:16:05.236009 sshd[6087]: Accepted publickey for core from 10.200.16.10 port 45338 ssh2: RSA SHA256:jDHHhcNhhDUZ5pWlaZmqbH8BGBKds8FI3MCEwU7TQfs
Aug 5 22:16:05.238052 sshd[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:16:05.243123 systemd-logind[1663]: New session 25 of user core.
Aug 5 22:16:05.249209 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:16:05.743380 sshd[6087]: pam_unix(sshd:session): session closed for user core
Aug 5 22:16:05.746504 systemd[1]: sshd@22-10.200.8.39:22-10.200.16.10:45338.service: Deactivated successfully.
Aug 5 22:16:05.748856 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:16:05.750426 systemd-logind[1663]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:16:05.751584 systemd-logind[1663]: Removed session 25.