Aug 5 22:21:52.051548 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:21:52.051583 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.051598 kernel: BIOS-provided physical RAM map: Aug 5 22:21:52.051609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 5 22:21:52.051620 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 5 22:21:52.051631 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 5 22:21:52.051644 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Aug 5 22:21:52.051658 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Aug 5 22:21:52.051670 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 5 22:21:52.051682 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 5 22:21:52.051694 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 5 22:21:52.051705 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 5 22:21:52.051716 kernel: printk: bootconsole [earlyser0] enabled Aug 5 22:21:52.051729 kernel: NX (Execute Disable) protection: active Aug 5 22:21:52.051746 kernel: APIC: Static calls initialized Aug 5 22:21:52.051758 kernel: efi: EFI v2.7 by Microsoft Aug 5 22:21:52.051770 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Aug 5 22:21:52.051782 kernel: SMBIOS 3.1.0 present. 
Aug 5 22:21:52.051795 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 5 22:21:52.051808 kernel: Hypervisor detected: Microsoft Hyper-V Aug 5 22:21:52.051821 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 5 22:21:52.051834 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Aug 5 22:21:52.051847 kernel: Hyper-V: Nested features: 0x1e0101 Aug 5 22:21:52.051860 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 5 22:21:52.051896 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 5 22:21:52.051907 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:21:52.051931 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:21:52.051944 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 5 22:21:52.051956 kernel: tsc: Detected 2593.906 MHz processor Aug 5 22:21:52.051968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:21:52.051981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:21:52.051993 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 5 22:21:52.052005 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 5 22:21:52.052030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:21:52.052042 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 5 22:21:52.052053 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 5 22:21:52.052064 kernel: Using GB pages for direct mapping Aug 5 22:21:52.052075 kernel: Secure boot disabled Aug 5 22:21:52.052086 kernel: ACPI: Early table checksum verification disabled Aug 5 22:21:52.052099 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 5 22:21:52.052117 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052132 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052145 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 22:21:52.052158 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 5 22:21:52.052172 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052185 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052214 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052231 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052245 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052259 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052272 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052286 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 5 22:21:52.052300 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 5 22:21:52.052314 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 5 22:21:52.052327 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 5 22:21:52.052344 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Aug 5 22:21:52.052358 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 5 22:21:52.052371 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Aug 5 22:21:52.052384 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 5 22:21:52.052398 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 5 22:21:52.052412 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 5 22:21:52.052426 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:21:52.052439 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:21:52.052452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 5 22:21:52.052470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 5 22:21:52.052483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 5 22:21:52.052497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 5 22:21:52.052511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 5 22:21:52.052525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 5 22:21:52.052538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 5 22:21:52.052552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 5 22:21:52.052566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 5 22:21:52.052579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 5 22:21:52.052596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 5 22:21:52.052609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 5 22:21:52.052623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 5 22:21:52.052637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 5 22:21:52.052650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 5 22:21:52.052664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 5 22:21:52.052678 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Aug 5 22:21:52.052692 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 5 22:21:52.052706 kernel: Zone ranges: Aug 5 22:21:52.052722 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:21:52.052736 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:21:52.052749 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:21:52.052763 kernel: Movable zone start for each node Aug 5 22:21:52.052776 kernel: Early memory node ranges Aug 5 22:21:52.052789 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 5 22:21:52.052803 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 5 22:21:52.052816 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 5 22:21:52.052830 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:21:52.052847 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 5 22:21:52.055746 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:21:52.055769 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 5 22:21:52.055783 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 5 22:21:52.055797 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 5 
22:21:52.055811 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 5 22:21:52.055825 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:21:52.055839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:21:52.055853 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:21:52.055888 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 5 22:21:52.055903 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:21:52.055916 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 5 22:21:52.055931 kernel: Booting paravirtualized kernel on Hyper-V Aug 5 22:21:52.055946 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:21:52.055960 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:21:52.055974 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:21:52.055988 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:21:52.056001 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:21:52.056019 kernel: Hyper-V: PV spinlocks enabled Aug 5 22:21:52.056032 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:21:52.056048 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.056063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:21:52.056077 kernel: random: crng init done Aug 5 22:21:52.056091 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:21:52.056104 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:21:52.056118 kernel: Fallback order for Node 0: 0 Aug 5 22:21:52.056136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 5 22:21:52.056160 kernel: Policy zone: Normal Aug 5 22:21:52.056177 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:21:52.056191 kernel: software IO TLB: area num 2. Aug 5 22:21:52.056206 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 316268K reserved, 0K cma-reserved) Aug 5 22:21:52.056221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:21:52.056236 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:21:52.056250 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:21:52.056264 kernel: Dynamic Preempt: voluntary Aug 5 22:21:52.056279 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:21:52.056294 kernel: rcu: RCU event tracing is enabled. Aug 5 22:21:52.056311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:21:52.056326 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:21:52.056341 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:21:52.056356 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:21:52.056369 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 22:21:52.056388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:21:52.056402 kernel: Using NULL legacy PIC Aug 5 22:21:52.056418 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 5 22:21:52.056434 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:21:52.056449 kernel: Console: colour dummy device 80x25 Aug 5 22:21:52.056464 kernel: printk: console [tty1] enabled Aug 5 22:21:52.056479 kernel: printk: console [ttyS0] enabled Aug 5 22:21:52.056494 kernel: printk: bootconsole [earlyser0] disabled Aug 5 22:21:52.056509 kernel: ACPI: Core revision 20230628 Aug 5 22:21:52.056524 kernel: Failed to register legacy timer interrupt Aug 5 22:21:52.056543 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:21:52.056558 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 22:21:52.056574 kernel: Hyper-V: Using IPI hypercalls Aug 5 22:21:52.056589 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 5 22:21:52.056616 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 5 22:21:52.056631 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 5 22:21:52.056646 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 5 22:21:52.056660 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 5 22:21:52.056675 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 5 22:21:52.056692 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Aug 5 22:21:52.056705 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 5 22:21:52.056719 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Aug 5 22:21:52.056733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:21:52.056746 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:21:52.056759 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:21:52.056773 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:21:52.056787 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 5 22:21:52.056799 kernel: RETBleed: Vulnerable Aug 5 22:21:52.056816 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:21:52.056829 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:21:52.056842 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:21:52.056855 kernel: GDS: Unknown: Dependent on hypervisor status Aug 5 22:21:52.056890 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:21:52.056918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:21:52.056931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:21:52.056944 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 5 22:21:52.056957 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 5 22:21:52.056970 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 5 22:21:52.056983 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:21:52.057000 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 5 22:21:52.057014 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 5 22:21:52.057027 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 5 22:21:52.057041 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 5 22:21:52.057055 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:21:52.057068 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:21:52.057081 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:21:52.057096 kernel: SELinux: Initializing. Aug 5 22:21:52.057110 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.057125 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.057155 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 5 22:21:52.057169 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057187 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057202 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057217 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 5 22:21:52.057230 kernel: signal: max sigframe size: 3632 Aug 5 22:21:52.057245 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:21:52.057259 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:21:52.057274 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:21:52.057288 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:21:52.057302 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:21:52.057320 kernel: .... node #0, CPUs: #1 Aug 5 22:21:52.057334 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 5 22:21:52.057349 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 5 22:21:52.057364 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:21:52.057378 kernel: smpboot: Max logical packages: 1 Aug 5 22:21:52.057393 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Aug 5 22:21:52.057407 kernel: devtmpfs: initialized Aug 5 22:21:52.057422 kernel: x86/mm: Memory block size: 128MB Aug 5 22:21:52.057439 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 5 22:21:52.057454 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:21:52.057469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:21:52.057483 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:21:52.057498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:21:52.057512 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:21:52.057527 kernel: audit: type=2000 audit(1722896510.028:1): state=initialized audit_enabled=0 res=1 Aug 5 22:21:52.057541 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:21:52.057556 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:21:52.057573 kernel: cpuidle: using governor menu Aug 5 22:21:52.057588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:21:52.057602 kernel: dca service started, version 1.12.1 Aug 5 22:21:52.057617 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Aug 5 22:21:52.057632 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:21:52.057647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:21:52.057662 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:21:52.057676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:21:52.057691 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:21:52.057709 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:21:52.057723 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:21:52.057738 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:21:52.057753 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:21:52.057767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:21:52.057782 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:21:52.057797 kernel: ACPI: Interpreter enabled Aug 5 22:21:52.057812 kernel: ACPI: PM: (supports S0 S5) Aug 5 22:21:52.057826 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:21:52.057844 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:21:52.057858 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:21:52.057883 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 5 22:21:52.057896 kernel: iommu: Default domain type: Translated Aug 5 22:21:52.057910 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:21:52.057923 kernel: efivars: Registered efivars operations Aug 5 22:21:52.057938 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:21:52.057955 kernel: PCI: System does not support PCI Aug 5 22:21:52.057970 kernel: vgaarb: loaded Aug 5 22:21:52.057988 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 5 22:21:52.058004 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:21:52.058018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:21:52.058032 kernel: pnp: PnP ACPI init Aug 5 22:21:52.058046 kernel: 
pnp: PnP ACPI: found 3 devices Aug 5 22:21:52.058061 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:21:52.058075 kernel: NET: Registered PF_INET protocol family Aug 5 22:21:52.058089 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:21:52.058105 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:21:52.058122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:21:52.058136 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:21:52.058150 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:21:52.058165 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:21:52.058180 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.058195 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.058209 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:21:52.058224 kernel: NET: Registered PF_XDP protocol family Aug 5 22:21:52.058238 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:21:52.058256 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:21:52.058270 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Aug 5 22:21:52.058284 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:21:52.058299 kernel: Initialise system trusted keyrings Aug 5 22:21:52.058313 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:21:52.058328 kernel: Key type asymmetric registered Aug 5 22:21:52.058343 kernel: Asymmetric key parser 'x509' registered Aug 5 22:21:52.058356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:21:52.058370 kernel: io scheduler mq-deadline registered Aug 5 22:21:52.058387 kernel: io scheduler kyber registered Aug 5 22:21:52.058400 kernel: io scheduler bfq registered Aug 5 22:21:52.058413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:21:52.058428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:21:52.058443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:21:52.058458 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:21:52.058473 kernel: i8042: PNP: No PS/2 controller found. 
Aug 5 22:21:52.058648 kernel: rtc_cmos 00:02: registered as rtc0 Aug 5 22:21:52.061645 kernel: rtc_cmos 00:02: setting system clock to 2024-08-05T22:21:51 UTC (1722896511) Aug 5 22:21:52.061774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 5 22:21:52.061794 kernel: intel_pstate: CPU model not supported Aug 5 22:21:52.061810 kernel: efifb: probing for efifb Aug 5 22:21:52.061826 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 22:21:52.061841 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 22:21:52.061856 kernel: efifb: scrolling: redraw Aug 5 22:21:52.064527 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 22:21:52.064552 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:21:52.064568 kernel: fb0: EFI VGA frame buffer device Aug 5 22:21:52.064583 kernel: pstore: Using crash dump compression: deflate Aug 5 22:21:52.064599 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:21:52.064614 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:21:52.064629 kernel: Segment Routing with IPv6 Aug 5 22:21:52.064644 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:21:52.064660 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:21:52.064675 kernel: Key type dns_resolver registered Aug 5 22:21:52.064690 kernel: IPI shorthand broadcast: enabled Aug 5 22:21:52.064708 kernel: sched_clock: Marking stable (837003300, 50535400)->(1106464600, -218925900) Aug 5 22:21:52.064723 kernel: registered taskstats version 1 Aug 5 22:21:52.064738 kernel: Loading compiled-in X.509 certificates Aug 5 22:21:52.064753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:21:52.064768 kernel: Key type .fscrypt registered Aug 5 22:21:52.064783 kernel: Key type fscrypt-provisioning registered Aug 5 22:21:52.064798 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:21:52.064813 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:21:52.064831 kernel: ima: No architecture policies found Aug 5 22:21:52.064846 kernel: clk: Disabling unused clocks Aug 5 22:21:52.064873 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:21:52.064887 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:21:52.064907 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:21:52.064921 kernel: Run /init as init process Aug 5 22:21:52.064934 kernel: with arguments: Aug 5 22:21:52.064948 kernel: /init Aug 5 22:21:52.064963 kernel: with environment: Aug 5 22:21:52.064981 kernel: HOME=/ Aug 5 22:21:52.064995 kernel: TERM=linux Aug 5 22:21:52.065010 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:21:52.065027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:21:52.065045 systemd[1]: Detected virtualization microsoft. Aug 5 22:21:52.065061 systemd[1]: Detected architecture x86-64. Aug 5 22:21:52.065076 systemd[1]: Running in initrd. Aug 5 22:21:52.065090 systemd[1]: No hostname configured, using default hostname. Aug 5 22:21:52.065108 systemd[1]: Hostname set to . Aug 5 22:21:52.065124 systemd[1]: Initializing machine ID from random generator. 
Aug 5 22:21:52.065138 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:21:52.065154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:21:52.065169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:21:52.065185 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:21:52.065201 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:21:52.065217 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:21:52.065235 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:21:52.065252 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:21:52.065268 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:21:52.065284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:21:52.065299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:21:52.065314 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:21:52.065329 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:21:52.065347 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:21:52.065361 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:21:52.065376 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:21:52.065391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:21:52.065406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:21:52.065421 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:21:52.065436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:21:52.065451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:21:52.065468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:21:52.065483 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:21:52.065498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:21:52.065512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:21:52.065528 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:21:52.065542 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:21:52.065557 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:21:52.065572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:21:52.065587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:52.065604 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:21:52.065641 systemd-journald[176]: Collecting audit messages is disabled. Aug 5 22:21:52.065674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:21:52.065689 systemd[1]: Finished systemd-fsck-usr.service. 
Aug 5 22:21:52.065710 systemd-journald[176]: Journal started Aug 5 22:21:52.065754 systemd-journald[176]: Runtime Journal (/run/log/journal/e8f58c2785ef455cbff96b089cae3dea) is 8.0M, max 158.8M, 150.8M free. Aug 5 22:21:52.059934 systemd-modules-load[177]: Inserted module 'overlay' Aug 5 22:21:52.076942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:21:52.096031 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:21:52.096765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:52.102311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:21:52.115897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:21:52.120608 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 5 22:21:52.122779 kernel: Bridge firewalling registered Aug 5 22:21:52.123013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:52.125991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:21:52.130497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:21:52.144194 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:21:52.154559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:21:52.155660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:21:52.158348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:21:52.164691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:52.175011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:21:52.183052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:21:52.194040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:21:52.202537 dracut-cmdline[212]: dracut-dracut-053 Aug 5 22:21:52.205913 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.229475 systemd-resolved[218]: Positive Trust Anchors: Aug 5 22:21:52.229496 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:21:52.229547 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:21:52.251821 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 5 22:21:52.255055 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:21:52.257546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:21:52.304894 kernel: SCSI subsystem initialized Aug 5 22:21:52.315886 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:21:52.328889 kernel: iscsi: registered transport (tcp) Aug 5 22:21:52.354172 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:21:52.354256 kernel: QLogic iSCSI HBA Driver Aug 5 22:21:52.389585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:21:52.399264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:21:52.428836 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:21:52.428927 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:21:52.431798 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:21:52.475889 kernel: raid6: avx512x4 gen() 18503 MB/s Aug 5 22:21:52.493875 kernel: raid6: avx512x2 gen() 18323 MB/s Aug 5 22:21:52.512873 kernel: raid6: avx512x1 gen() 18297 MB/s Aug 5 22:21:52.531878 kernel: raid6: avx2x4 gen() 18325 MB/s Aug 5 22:21:52.550873 kernel: raid6: avx2x2 gen() 18367 MB/s Aug 5 22:21:52.570705 kernel: raid6: avx2x1 gen() 13987 MB/s Aug 5 22:21:52.570743 kernel: raid6: using algorithm avx512x4 gen() 18503 MB/s Aug 5 22:21:52.592336 kernel: raid6: .... xor() 7970 MB/s, rmw enabled Aug 5 22:21:52.592375 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:21:52.617888 kernel: xor: automatically using best checksumming function avx Aug 5 22:21:52.782890 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:21:52.792469 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:21:52.802025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:21:52.818397 systemd-udevd[398]: Using default interface naming scheme 'v255'. Aug 5 22:21:52.824564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:21:52.839519 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:21:52.850586 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 5 22:21:52.876987 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:21:52.885138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:21:52.922558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:21:52.936453 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Aug 5 22:21:52.973238 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:21:52.979428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:21:52.985893 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:21:52.991542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:21:53.001256 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:21:53.015882 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:21:53.017762 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:21:53.020502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:53.027082 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:53.029680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:53.029801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.032408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.049649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.053183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:21:53.076278 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:21:53.076335 kernel: AES CTR mode by8 optimization enabled Aug 5 22:21:53.077024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:53.080977 kernel: hv_vmbus: Vmbus version:5.2 Aug 5 22:21:53.077595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.092898 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 22:21:53.093152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.114667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.128905 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 22:21:53.130172 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:53.161936 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 22:21:53.165607 kernel: scsi host0: storvsc_host_t Aug 5 22:21:53.165659 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 22:21:53.177738 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 22:21:53.177820 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 22:21:53.177842 kernel: scsi host1: storvsc_host_t Aug 5 22:21:53.177885 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 5 22:21:53.188432 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 22:21:53.188474 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 22:21:53.186717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 5 22:21:53.214395 kernel: PTP clock support registered Aug 5 22:21:53.214447 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 22:21:53.231905 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 22:21:53.231958 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 22:21:53.231978 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 22:21:53.232194 kernel: hv_vmbus: registering driver hv_utils Aug 5 22:21:53.236628 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 22:21:53.236695 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 22:21:53.238596 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 22:21:53.196626 systemd-resolved[218]: Clock change detected. Flushing caches. Aug 5 22:21:53.205015 systemd-journald[176]: Time jumped backwards, rotating. Aug 5 22:21:53.217462 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 22:21:53.223005 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 22:21:53.223019 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 22:21:53.237211 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 22:21:53.237406 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 22:21:53.238927 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 22:21:53.239094 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 22:21:53.239266 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 22:21:53.239422 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:53.239437 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 22:21:53.321539 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: VF slot 1 added Aug 5 22:21:53.339469 kernel: hv_vmbus: registering driver hv_pci Aug 5 22:21:53.344764 kernel: hv_pci 8e2c3fc4-a020-4cee-af9a-87cfed70501e: PCI VMBus probing: Using version 0x10004 Aug 5 22:21:53.393739 kernel: hv_pci 8e2c3fc4-a020-4cee-af9a-87cfed70501e: PCI host bridge to bus a020:00 Aug 5 22:21:53.393891 kernel: pci_bus a020:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 5 22:21:53.394023 kernel: pci_bus a020:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 22:21:53.394147 kernel: pci a020:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 5 22:21:53.394277 kernel: pci a020:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:21:53.394395 kernel: pci a020:00:02.0: enabling Extended Tags Aug 5 22:21:53.394537 kernel: pci a020:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a020:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 5 22:21:53.394657 kernel: pci_bus a020:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 22:21:53.394765 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Aug 5 22:21:53.394778 kernel: pci a020:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:21:53.383675 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 22:21:53.421368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:21:53.432582 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (442) Aug 5 22:21:53.460749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Aug 5 22:21:53.484082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 22:21:53.487052 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 22:21:53.502690 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:21:53.538200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:53.696635 kernel: mlx5_core a020:00:02.0: enabling device (0000 -> 0002) Aug 5 22:21:53.925942 kernel: mlx5_core a020:00:02.0: firmware version: 14.30.1284 Aug 5 22:21:53.926172 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: VF registering: eth1 Aug 5 22:21:53.926341 kernel: mlx5_core a020:00:02.0 eth1: joined to eth0 Aug 5 22:21:53.926548 kernel: mlx5_core a020:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 5 22:21:53.932483 kernel: mlx5_core a020:00:02.0 enP40992s1: renamed from eth1 Aug 5 22:21:54.553268 disk-uuid[596]: The operation has completed successfully. Aug 5 22:21:54.556606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:54.647989 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:21:54.648101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:21:54.662644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:21:54.667976 sh[691]: Success Aug 5 22:21:54.688768 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:21:54.776194 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:21:54.795568 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:21:54.803528 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:21:54.836484 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 5 22:21:54.836529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:54.841954 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:21:54.844909 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:21:54.847553 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:21:54.905781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:21:54.910436 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:21:54.920613 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:21:54.925602 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:21:54.939700 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:54.939746 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:54.941395 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:54.952098 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:54.963653 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:54.963238 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:21:54.973735 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Aug 5 22:21:54.986608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:21:55.028930 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:21:55.036725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:21:55.058516 systemd-networkd[875]: lo: Link UP Aug 5 22:21:55.058524 systemd-networkd[875]: lo: Gained carrier Aug 5 22:21:55.060792 systemd-networkd[875]: Enumeration completed Aug 5 22:21:55.061290 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:21:55.062762 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:21:55.062765 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:21:55.064471 systemd[1]: Reached target network.target - Network. Aug 5 22:21:55.117477 kernel: mlx5_core a020:00:02.0 enP40992s1: Link up Aug 5 22:21:55.150488 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: Data path switched to VF: enP40992s1 Aug 5 22:21:55.151328 systemd-networkd[875]: enP40992s1: Link UP Aug 5 22:21:55.151574 systemd-networkd[875]: eth0: Link UP Aug 5 22:21:55.151834 systemd-networkd[875]: eth0: Gained carrier Aug 5 22:21:55.151862 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:21:55.161777 systemd-networkd[875]: enP40992s1: Gained carrier Aug 5 22:21:55.203574 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 5 22:21:55.402628 ignition[796]: Ignition 2.19.0 Aug 5 22:21:55.402643 ignition[796]: Stage: fetch-offline Aug 5 22:21:55.404405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:21:55.402691 ignition[796]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.402702 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.402830 ignition[796]: parsed url from cmdline: "" Aug 5 22:21:55.402836 ignition[796]: no config URL provided Aug 5 22:21:55.402843 ignition[796]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:21:55.402854 ignition[796]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:21:55.402861 ignition[796]: failed to fetch config: resource requires networking Aug 5 22:21:55.403085 ignition[796]: Ignition finished successfully Aug 5 22:21:55.434612 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 5 22:21:55.448853 ignition[884]: Ignition 2.19.0 Aug 5 22:21:55.448863 ignition[884]: Stage: fetch Aug 5 22:21:55.449073 ignition[884]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.449085 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.449179 ignition[884]: parsed url from cmdline: "" Aug 5 22:21:55.449183 ignition[884]: no config URL provided Aug 5 22:21:55.449187 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:21:55.449194 ignition[884]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:21:55.449212 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 22:21:55.546105 ignition[884]: GET result: OK Aug 5 22:21:55.546202 ignition[884]: config has been read from IMDS userdata Aug 5 22:21:55.546231 ignition[884]: parsing config with SHA512: 051230907dc2555536046337eed6e1776274d107febcf782fc5895610b726bdbd6843f9b12a06f535c87877e53a0362a0b296e3643fdc48aa65d3efe0c09f581 Aug 5 22:21:55.554920 unknown[884]: fetched base config from "system" Aug 5 22:21:55.555063 unknown[884]: fetched base config from "system" Aug 5 22:21:55.555649 ignition[884]: fetch: fetch complete Aug 5 22:21:55.555069 unknown[884]: fetched user config from "azure" Aug 5 22:21:55.555654 ignition[884]: fetch: fetch passed Aug 5 22:21:55.559930 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:21:55.555700 ignition[884]: Ignition finished successfully Aug 5 22:21:55.573632 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:21:55.589766 ignition[891]: Ignition 2.19.0 Aug 5 22:21:55.589777 ignition[891]: Stage: kargs Aug 5 22:21:55.589996 ignition[891]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.591898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:21:55.590009 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.590898 ignition[891]: kargs: kargs passed Aug 5 22:21:55.590943 ignition[891]: Ignition finished successfully Aug 5 22:21:55.606807 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:21:55.622214 ignition[898]: Ignition 2.19.0 Aug 5 22:21:55.622224 ignition[898]: Stage: disks Aug 5 22:21:55.622442 ignition[898]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.624312 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:21:55.622470 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.623404 ignition[898]: disks: disks passed Aug 5 22:21:55.623444 ignition[898]: Ignition finished successfully Aug 5 22:21:55.637140 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:21:55.642035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:21:55.645119 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:21:55.652198 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:21:55.656751 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:21:55.664892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:21:55.689398 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 22:21:55.692474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:21:55.705392 systemd[1]: Mounting sysroot.mount - /sysroot... 
Aug 5 22:21:55.806604 kernel: EXT4-fs (sda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 5 22:21:55.807173 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:21:55.811353 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:21:55.827543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:21:55.834949 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:21:55.844487 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (918) Aug 5 22:21:55.853491 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:55.853546 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:55.853570 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:55.854679 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 22:21:55.862877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:21:55.867702 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:55.863529 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:21:55.871776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:21:55.876403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:21:55.883645 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:21:56.091219 coreos-metadata[920]: Aug 05 22:21:56.091 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:21:56.096516 coreos-metadata[920]: Aug 05 22:21:56.096 INFO Fetch successful Aug 5 22:21:56.099338 coreos-metadata[920]: Aug 05 22:21:56.096 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:21:56.114430 coreos-metadata[920]: Aug 05 22:21:56.114 INFO Fetch successful Aug 5 22:21:56.118385 coreos-metadata[920]: Aug 05 22:21:56.118 INFO wrote hostname ci-4012.1.0-a-bfd2eb4520 to /sysroot/etc/hostname Aug 5 22:21:56.123801 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:21:56.149835 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:21:56.158626 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:21:56.165954 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:21:56.171041 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:21:56.395624 systemd-networkd[875]: eth0: Gained IPv6LL Aug 5 22:21:56.450345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:21:56.459553 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:21:56.465975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:21:56.471389 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Aug 5 22:21:56.477954 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:56.504669 ignition[1035]: INFO : Ignition 2.19.0 Aug 5 22:21:56.504669 ignition[1035]: INFO : Stage: mount Aug 5 22:21:56.504669 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:56.504669 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:56.504669 ignition[1035]: INFO : mount: mount passed Aug 5 22:21:56.504669 ignition[1035]: INFO : Ignition finished successfully Aug 5 22:21:56.504507 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:21:56.521066 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:21:56.527486 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:21:56.540654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:21:56.549467 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048) Aug 5 22:21:56.553473 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:56.553517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:56.557392 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:56.562468 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:56.564021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:21:56.587848 systemd-networkd[875]: enP40992s1: Gained IPv6LL Aug 5 22:21:56.590146 ignition[1065]: INFO : Ignition 2.19.0 Aug 5 22:21:56.590146 ignition[1065]: INFO : Stage: files Aug 5 22:21:56.590146 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:56.590146 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:56.590146 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:21:56.600901 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:21:56.600901 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:21:56.605849 unknown[1065]: wrote ssh authorized keys file for user: core Aug 5 22:21:56.622377 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:21:56.622377 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:21:56.814474 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:21:56.874425 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:21:56.881286 
ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 5 22:21:57.382373 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:21:57.523002 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:57.523002 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:21:57.531258 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:21:57.545633 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:21:57.548806 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:21:57.555084 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:21:57.555084 ignition[1065]: INFO : files: files passed Aug 5 22:21:57.555084 ignition[1065]: INFO : 
Ignition finished successfully Aug 5 22:21:57.550630 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:21:57.563635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:21:57.569784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:21:57.577853 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:21:57.577956 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:21:57.592900 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.592900 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.601652 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.606506 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:21:57.609397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:21:57.624786 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:21:57.650636 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:21:57.650746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:21:57.656694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:21:57.663975 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:21:57.666586 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:21:57.672624 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:21:57.684930 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:21:57.692600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:21:57.702882 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:21:57.707871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:21:57.713000 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:21:57.717000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:21:57.717148 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:21:57.724371 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:21:57.728864 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:21:57.730803 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:21:57.737319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:21:57.740088 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:21:57.747367 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:21:57.749789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:21:57.754551 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:21:57.759597 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:21:57.766046 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:21:57.767810 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Aug 5 22:21:57.767928 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:21:57.776390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:21:57.781083 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:21:57.783812 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:21:57.786157 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:21:57.788971 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:21:57.789110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:21:57.800248 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:21:57.800429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:21:57.805550 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:21:57.810373 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:21:57.814654 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 22:21:57.814789 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:21:57.831232 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:21:57.833268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:21:57.833431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:21:57.850414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:21:57.852586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:21:57.860102 ignition[1118]: INFO : Ignition 2.19.0 Aug 5 22:21:57.860102 ignition[1118]: INFO : Stage: umount Aug 5 22:21:57.860102 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:57.860102 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:57.860102 ignition[1118]: INFO : umount: umount passed Aug 5 22:21:57.860102 ignition[1118]: INFO : Ignition finished successfully Aug 5 22:21:57.852748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:21:57.855810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:21:57.856053 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:21:57.870785 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:21:57.870874 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:21:57.874758 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:21:57.874846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:21:57.880165 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:21:57.880235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:21:57.885419 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:21:57.885540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:21:57.889374 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:21:57.889423 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:21:57.891738 systemd[1]: Stopped target network.target - Network. Aug 5 22:21:57.896179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 5 22:21:57.896239 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:21:57.900687 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:21:57.901145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:21:57.904511 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:21:57.938203 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:21:57.940161 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:21:57.944000 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:21:57.944058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:21:57.946287 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:21:57.950025 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:21:57.958068 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:21:57.958137 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:21:57.962185 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:21:57.962237 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:21:57.965089 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:21:57.969321 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:21:57.970522 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:21:57.977576 systemd-networkd[875]: eth0: DHCPv6 lease lost Aug 5 22:21:57.979528 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:21:57.979654 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:21:57.986463 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:21:57.986602 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:21:57.991180 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:21:57.991257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:21:58.012633 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:21:58.016839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:21:58.016911 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:21:58.024761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:21:58.024829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:21:58.033074 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:21:58.033139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:21:58.037791 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:21:58.037849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:21:58.047034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:21:58.071155 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:21:58.071327 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:21:58.076590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:21:58.076640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Aug 5 22:21:58.087831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:21:58.087885 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:21:58.094795 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:21:58.094863 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:21:58.105251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:21:58.107332 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: Data path switched from VF: enP40992s1 Aug 5 22:21:58.105320 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:21:58.110083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:21:58.110130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:58.122665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:21:58.128255 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:21:58.128330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:21:58.131336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:58.131403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:58.134352 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:21:58.134441 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:21:58.149603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:21:58.149702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:21:58.917189 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:21:58.917321 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:21:58.920380 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:21:58.925224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:21:58.925281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:21:58.942624 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:21:58.951915 systemd[1]: Switching root. 
Aug 5 22:21:59.033710 systemd-journald[176]: Journal stopped Aug 5 22:21:52.051548 kernel: Linux version 6.6.43-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20240210 p14) 13.2.1 20240210, GNU ld (Gentoo 2.41 p5) 2.41.0) #1 SMP PREEMPT_DYNAMIC Mon Aug 5 20:36:22 -00 2024 Aug 5 22:21:52.051583 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.051598 kernel: BIOS-provided physical RAM map: Aug 5 22:21:52.051609 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Aug 5 22:21:52.051620 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Aug 5 22:21:52.051631 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Aug 5 22:21:52.051644 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Aug 5 22:21:52.051658 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Aug 5 22:21:52.051670 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Aug 5 22:21:52.051682 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Aug 5 22:21:52.051694 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Aug 5 22:21:52.051705 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Aug 5 22:21:52.051716 kernel: printk: bootconsole [earlyser0] enabled Aug 5 22:21:52.051729 kernel: NX (Execute Disable) protection: active Aug 5 22:21:52.051746 kernel: APIC: Static calls initialized Aug 5 22:21:52.051758 kernel: efi: EFI v2.7 by Microsoft Aug 5 22:21:52.051770 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c0a98 Aug 5 22:21:52.051782 kernel: SMBIOS 3.1.0 present. 
Aug 5 22:21:52.051795 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Aug 5 22:21:52.051808 kernel: Hypervisor detected: Microsoft Hyper-V Aug 5 22:21:52.051821 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Aug 5 22:21:52.051834 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Aug 5 22:21:52.051847 kernel: Hyper-V: Nested features: 0x1e0101 Aug 5 22:21:52.051860 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Aug 5 22:21:52.051896 kernel: Hyper-V: Using hypercall for remote TLB flush Aug 5 22:21:52.051907 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:21:52.051931 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Aug 5 22:21:52.051944 kernel: tsc: Marking TSC unstable due to running on Hyper-V Aug 5 22:21:52.051956 kernel: tsc: Detected 2593.906 MHz processor Aug 5 22:21:52.051968 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 5 22:21:52.051981 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 5 22:21:52.051993 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Aug 5 22:21:52.052005 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Aug 5 22:21:52.052030 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 5 22:21:52.052042 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Aug 5 22:21:52.052053 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Aug 5 22:21:52.052064 kernel: Using GB pages for direct mapping Aug 5 22:21:52.052075 kernel: Secure boot disabled Aug 5 22:21:52.052086 kernel: ACPI: Early table checksum verification disabled Aug 5 22:21:52.052099 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Aug 5 22:21:52.052117 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052132 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052145 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Aug 5 22:21:52.052158 kernel: ACPI: FACS 0x000000003FFFE000 000040 Aug 5 22:21:52.052172 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052185 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052214 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052231 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052245 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052259 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052272 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Aug 5 22:21:52.052286 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Aug 5 22:21:52.052300 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Aug 5 22:21:52.052314 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Aug 5 22:21:52.052327 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Aug 5 22:21:52.052344 kernel: ACPI: Reserving SPCR table memory at 
[mem 0x3fff6000-0x3fff604f] Aug 5 22:21:52.052358 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Aug 5 22:21:52.052371 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Aug 5 22:21:52.052384 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Aug 5 22:21:52.052398 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Aug 5 22:21:52.052412 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Aug 5 22:21:52.052426 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Aug 5 22:21:52.052439 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Aug 5 22:21:52.052452 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Aug 5 22:21:52.052470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Aug 5 22:21:52.052483 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Aug 5 22:21:52.052497 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Aug 5 22:21:52.052511 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Aug 5 22:21:52.052525 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Aug 5 22:21:52.052538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Aug 5 22:21:52.052552 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Aug 5 22:21:52.052566 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Aug 5 22:21:52.052579 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Aug 5 22:21:52.052596 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Aug 5 22:21:52.052609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Aug 5 22:21:52.052623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Aug 5 22:21:52.052637 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Aug 5 22:21:52.052650 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Aug 5 22:21:52.052664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Aug 5 22:21:52.052678 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Aug 5 22:21:52.052692 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Aug 5 22:21:52.052706 kernel: Zone ranges: Aug 5 22:21:52.052722 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 5 22:21:52.052736 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 5 22:21:52.052749 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:21:52.052763 kernel: Movable zone start for each node Aug 5 22:21:52.052776 kernel: Early memory node ranges Aug 5 22:21:52.052789 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Aug 5 22:21:52.052803 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Aug 5 22:21:52.052816 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Aug 5 22:21:52.052830 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Aug 5 22:21:52.052847 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Aug 5 22:21:52.055746 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 5 22:21:52.055769 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Aug 5 22:21:52.055783 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Aug 5 22:21:52.055797 kernel: ACPI: PM-Timer IO Port: 0x408 Aug 5 
22:21:52.055811 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Aug 5 22:21:52.055825 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Aug 5 22:21:52.055839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 5 22:21:52.055853 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 5 22:21:52.055888 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Aug 5 22:21:52.055903 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Aug 5 22:21:52.055916 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Aug 5 22:21:52.055931 kernel: Booting paravirtualized kernel on Hyper-V Aug 5 22:21:52.055946 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 5 22:21:52.055960 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 5 22:21:52.055974 kernel: percpu: Embedded 58 pages/cpu s196904 r8192 d32472 u1048576 Aug 5 22:21:52.055988 kernel: pcpu-alloc: s196904 r8192 d32472 u1048576 alloc=1*2097152 Aug 5 22:21:52.056001 kernel: pcpu-alloc: [0] 0 1 Aug 5 22:21:52.056019 kernel: Hyper-V: PV spinlocks enabled Aug 5 22:21:52.056032 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 5 22:21:52.056048 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.056063 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Aug 5 22:21:52.056077 kernel: random: crng init done Aug 5 22:21:52.056091 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Aug 5 22:21:52.056104 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 5 22:21:52.056118 kernel: Fallback order for Node 0: 0 Aug 5 22:21:52.056136 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Aug 5 22:21:52.056160 kernel: Policy zone: Normal Aug 5 22:21:52.056177 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 5 22:21:52.056191 kernel: software IO TLB: area num 2. Aug 5 22:21:52.056206 kernel: Memory: 8070932K/8387460K available (12288K kernel code, 2302K rwdata, 22640K rodata, 49372K init, 1972K bss, 316268K reserved, 0K cma-reserved) Aug 5 22:21:52.056221 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 5 22:21:52.056236 kernel: ftrace: allocating 37659 entries in 148 pages Aug 5 22:21:52.056250 kernel: ftrace: allocated 148 pages with 3 groups Aug 5 22:21:52.056264 kernel: Dynamic Preempt: voluntary Aug 5 22:21:52.056279 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 5 22:21:52.056294 kernel: rcu: RCU event tracing is enabled. Aug 5 22:21:52.056311 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 5 22:21:52.056326 kernel: Trampoline variant of Tasks RCU enabled. Aug 5 22:21:52.056341 kernel: Rude variant of Tasks RCU enabled. Aug 5 22:21:52.056356 kernel: Tracing variant of Tasks RCU enabled. Aug 5 22:21:52.056369 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Aug 5 22:21:52.056388 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 5 22:21:52.056402 kernel: Using NULL legacy PIC Aug 5 22:21:52.056418 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Aug 5 22:21:52.056434 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 5 22:21:52.056449 kernel: Console: colour dummy device 80x25 Aug 5 22:21:52.056464 kernel: printk: console [tty1] enabled Aug 5 22:21:52.056479 kernel: printk: console [ttyS0] enabled Aug 5 22:21:52.056494 kernel: printk: bootconsole [earlyser0] disabled Aug 5 22:21:52.056509 kernel: ACPI: Core revision 20230628 Aug 5 22:21:52.056524 kernel: Failed to register legacy timer interrupt Aug 5 22:21:52.056543 kernel: APIC: Switch to symmetric I/O mode setup Aug 5 22:21:52.056558 kernel: Hyper-V: enabling crash_kexec_post_notifiers Aug 5 22:21:52.056574 kernel: Hyper-V: Using IPI hypercalls Aug 5 22:21:52.056589 kernel: APIC: send_IPI() replaced with hv_send_ipi() Aug 5 22:21:52.056616 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() Aug 5 22:21:52.056631 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() Aug 5 22:21:52.056646 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() Aug 5 22:21:52.056660 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() Aug 5 22:21:52.056675 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() Aug 5 22:21:52.056692 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Aug 5 22:21:52.056705 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Aug 5 22:21:52.056719 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Aug 5 22:21:52.056733 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 5 22:21:52.056746 kernel: Spectre V2 : Mitigation: Retpolines Aug 5 22:21:52.056759 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Aug 5 22:21:52.056773 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Aug 5 22:21:52.056787 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Aug 5 22:21:52.056799 kernel: RETBleed: Vulnerable Aug 5 22:21:52.056816 kernel: Speculative Store Bypass: Vulnerable Aug 5 22:21:52.056829 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:21:52.056842 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Aug 5 22:21:52.056855 kernel: GDS: Unknown: Dependent on hypervisor status Aug 5 22:21:52.056890 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 5 22:21:52.056918 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 5 22:21:52.056931 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 5 22:21:52.056944 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Aug 5 22:21:52.056957 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Aug 5 22:21:52.056970 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Aug 5 22:21:52.056983 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 5 22:21:52.057000 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Aug 5 22:21:52.057014 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Aug 5 22:21:52.057027 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Aug 5 22:21:52.057041 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Aug 5 22:21:52.057055 kernel: Freeing SMP alternatives memory: 32K Aug 5 22:21:52.057068 kernel: pid_max: default: 32768 minimum: 301 Aug 5 22:21:52.057081 kernel: LSM: initializing lsm=lockdown,capability,selinux,integrity Aug 5 22:21:52.057096 kernel: SELinux: Initializing. Aug 5 22:21:52.057110 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.057125 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.057155 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Aug 5 22:21:52.057169 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057187 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057202 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1. Aug 5 22:21:52.057217 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Aug 5 22:21:52.057230 kernel: signal: max sigframe size: 3632 Aug 5 22:21:52.057245 kernel: rcu: Hierarchical SRCU implementation. Aug 5 22:21:52.057259 kernel: rcu: Max phase no-delay instances is 400. Aug 5 22:21:52.057274 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Aug 5 22:21:52.057288 kernel: smp: Bringing up secondary CPUs ... Aug 5 22:21:52.057302 kernel: smpboot: x86: Booting SMP configuration: Aug 5 22:21:52.057320 kernel: .... node #0, CPUs: #1 Aug 5 22:21:52.057334 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Aug 5 22:21:52.057349 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Aug 5 22:21:52.057364 kernel: smp: Brought up 1 node, 2 CPUs Aug 5 22:21:52.057378 kernel: smpboot: Max logical packages: 1 Aug 5 22:21:52.057393 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Aug 5 22:21:52.057407 kernel: devtmpfs: initialized Aug 5 22:21:52.057422 kernel: x86/mm: Memory block size: 128MB Aug 5 22:21:52.057439 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Aug 5 22:21:52.057454 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 5 22:21:52.057469 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 5 22:21:52.057483 kernel: pinctrl core: initialized pinctrl subsystem Aug 5 22:21:52.057498 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 5 22:21:52.057512 kernel: audit: initializing netlink subsys (disabled) Aug 5 22:21:52.057527 kernel: audit: type=2000 audit(1722896510.028:1): state=initialized audit_enabled=0 res=1 Aug 5 22:21:52.057541 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 5 22:21:52.057556 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 5 22:21:52.057573 kernel: cpuidle: using governor menu Aug 5 22:21:52.057588 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 5 22:21:52.057602 kernel: dca service started, version 1.12.1 Aug 5 22:21:52.057617 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Aug 5 22:21:52.057632 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Aug 5 22:21:52.057647 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 5 22:21:52.057662 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 5 22:21:52.057676 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 5 22:21:52.057691 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 5 22:21:52.057709 kernel: ACPI: Added _OSI(Module Device) Aug 5 22:21:52.057723 kernel: ACPI: Added _OSI(Processor Device) Aug 5 22:21:52.057738 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Aug 5 22:21:52.057753 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 5 22:21:52.057767 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 5 22:21:52.057782 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Aug 5 22:21:52.057797 kernel: ACPI: Interpreter enabled Aug 5 22:21:52.057812 kernel: ACPI: PM: (supports S0 S5) Aug 5 22:21:52.057826 kernel: ACPI: Using IOAPIC for interrupt routing Aug 5 22:21:52.057844 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 5 22:21:52.057858 kernel: PCI: Ignoring E820 reservations for host bridge windows Aug 5 22:21:52.057883 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Aug 5 22:21:52.057896 kernel: iommu: Default domain type: Translated Aug 5 22:21:52.057910 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 5 22:21:52.057923 kernel: efivars: Registered efivars operations Aug 5 22:21:52.057938 kernel: PCI: Using ACPI for IRQ routing Aug 5 22:21:52.057955 kernel: PCI: System does not support PCI Aug 5 22:21:52.057970 kernel: vgaarb: loaded Aug 5 22:21:52.057988 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Aug 5 22:21:52.058004 kernel: VFS: Disk quotas dquot_6.6.0 Aug 5 22:21:52.058018 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 5 22:21:52.058032 kernel: pnp: PnP ACPI init Aug 5 22:21:52.058046 kernel: 
pnp: PnP ACPI: found 3 devices Aug 5 22:21:52.058061 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 5 22:21:52.058075 kernel: NET: Registered PF_INET protocol family Aug 5 22:21:52.058089 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Aug 5 22:21:52.058105 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Aug 5 22:21:52.058122 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 5 22:21:52.058136 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 5 22:21:52.058150 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Aug 5 22:21:52.058165 kernel: TCP: Hash tables configured (established 65536 bind 65536) Aug 5 22:21:52.058180 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.058195 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Aug 5 22:21:52.058209 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 5 22:21:52.058224 kernel: NET: Registered PF_XDP protocol family Aug 5 22:21:52.058238 kernel: PCI: CLS 0 bytes, default 64 Aug 5 22:21:52.058256 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 5 22:21:52.058270 kernel: software IO TLB: mapped [mem 0x000000003b5c0000-0x000000003f5c0000] (64MB) Aug 5 22:21:52.058284 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Aug 5 22:21:52.058299 kernel: Initialise system trusted keyrings Aug 5 22:21:52.058313 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Aug 5 22:21:52.058328 kernel: Key type asymmetric registered Aug 5 22:21:52.058343 kernel: Asymmetric key parser 'x509' registered Aug 5 22:21:52.058356 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Aug 5 22:21:52.058370 kernel: io scheduler mq-deadline registered Aug 5 22:21:52.058387 kernel: io scheduler kyber registered Aug 5 22:21:52.058400 kernel: io scheduler bfq registered Aug 5 22:21:52.058413 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 5 22:21:52.058428 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 5 22:21:52.058443 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 5 22:21:52.058458 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Aug 5 22:21:52.058473 kernel: i8042: PNP: No PS/2 controller found. 
Aug 5 22:21:52.058648 kernel: rtc_cmos 00:02: registered as rtc0 Aug 5 22:21:52.061645 kernel: rtc_cmos 00:02: setting system clock to 2024-08-05T22:21:51 UTC (1722896511) Aug 5 22:21:52.061774 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Aug 5 22:21:52.061794 kernel: intel_pstate: CPU model not supported Aug 5 22:21:52.061810 kernel: efifb: probing for efifb Aug 5 22:21:52.061826 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Aug 5 22:21:52.061841 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Aug 5 22:21:52.061856 kernel: efifb: scrolling: redraw Aug 5 22:21:52.064527 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Aug 5 22:21:52.064552 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:21:52.064568 kernel: fb0: EFI VGA frame buffer device Aug 5 22:21:52.064583 kernel: pstore: Using crash dump compression: deflate Aug 5 22:21:52.064599 kernel: pstore: Registered efi_pstore as persistent store backend Aug 5 22:21:52.064614 kernel: NET: Registered PF_INET6 protocol family Aug 5 22:21:52.064629 kernel: Segment Routing with IPv6 Aug 5 22:21:52.064644 kernel: In-situ OAM (IOAM) with IPv6 Aug 5 22:21:52.064660 kernel: NET: Registered PF_PACKET protocol family Aug 5 22:21:52.064675 kernel: Key type dns_resolver registered Aug 5 22:21:52.064690 kernel: IPI shorthand broadcast: enabled Aug 5 22:21:52.064708 kernel: sched_clock: Marking stable (837003300, 50535400)->(1106464600, -218925900) Aug 5 22:21:52.064723 kernel: registered taskstats version 1 Aug 5 22:21:52.064738 kernel: Loading compiled-in X.509 certificates Aug 5 22:21:52.064753 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.43-flatcar: d8f193b4a33a492a73da7ce4522bbc835ec39532' Aug 5 22:21:52.064768 kernel: Key type .fscrypt registered Aug 5 22:21:52.064783 kernel: Key type fscrypt-provisioning registered Aug 5 22:21:52.064798 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 5 22:21:52.064813 kernel: ima: Allocated hash algorithm: sha1 Aug 5 22:21:52.064831 kernel: ima: No architecture policies found Aug 5 22:21:52.064846 kernel: clk: Disabling unused clocks Aug 5 22:21:52.064873 kernel: Freeing unused kernel image (initmem) memory: 49372K Aug 5 22:21:52.064887 kernel: Write protecting the kernel read-only data: 36864k Aug 5 22:21:52.064907 kernel: Freeing unused kernel image (rodata/data gap) memory: 1936K Aug 5 22:21:52.064921 kernel: Run /init as init process Aug 5 22:21:52.064934 kernel: with arguments: Aug 5 22:21:52.064948 kernel: /init Aug 5 22:21:52.064963 kernel: with environment: Aug 5 22:21:52.064981 kernel: HOME=/ Aug 5 22:21:52.064995 kernel: TERM=linux Aug 5 22:21:52.065010 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 5 22:21:52.065027 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:21:52.065045 systemd[1]: Detected virtualization microsoft. Aug 5 22:21:52.065061 systemd[1]: Detected architecture x86-64. Aug 5 22:21:52.065076 systemd[1]: Running in initrd. Aug 5 22:21:52.065090 systemd[1]: No hostname configured, using default hostname. Aug 5 22:21:52.065108 systemd[1]: Hostname set to . Aug 5 22:21:52.065124 systemd[1]: Initializing machine ID from random generator. 
Aug 5 22:21:52.065138 systemd[1]: Queued start job for default target initrd.target. Aug 5 22:21:52.065154 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:21:52.065169 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:21:52.065185 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Aug 5 22:21:52.065201 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:21:52.065217 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 5 22:21:52.065235 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 5 22:21:52.065252 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 5 22:21:52.065268 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 5 22:21:52.065284 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:21:52.065299 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:21:52.065314 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:21:52.065329 systemd[1]: Reached target slices.target - Slice Units. Aug 5 22:21:52.065347 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:21:52.065361 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:21:52.065376 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:21:52.065391 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:21:52.065406 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 5 22:21:52.065421 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Aug 5 22:21:52.065436 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:21:52.065451 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:21:52.065468 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:21:52.065483 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:21:52.065498 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 5 22:21:52.065512 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:21:52.065528 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 5 22:21:52.065542 systemd[1]: Starting systemd-fsck-usr.service... Aug 5 22:21:52.065557 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:21:52.065572 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:21:52.065587 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:52.065604 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 5 22:21:52.065641 systemd-journald[176]: Collecting audit messages is disabled. Aug 5 22:21:52.065674 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:21:52.065689 systemd[1]: Finished systemd-fsck-usr.service. 
Aug 5 22:21:52.065710 systemd-journald[176]: Journal started Aug 5 22:21:52.065754 systemd-journald[176]: Runtime Journal (/run/log/journal/e8f58c2785ef455cbff96b089cae3dea) is 8.0M, max 158.8M, 150.8M free. Aug 5 22:21:52.059934 systemd-modules-load[177]: Inserted module 'overlay' Aug 5 22:21:52.076942 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 5 22:21:52.096031 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:21:52.096765 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:52.102311 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 5 22:21:52.115897 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 5 22:21:52.120608 systemd-modules-load[177]: Inserted module 'br_netfilter' Aug 5 22:21:52.122779 kernel: Bridge firewalling registered Aug 5 22:21:52.123013 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:52.125991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:21:52.130497 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:21:52.144194 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:21:52.154559 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:21:52.155660 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:21:52.158348 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:21:52.164691 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:52.175011 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 5 22:21:52.183052 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:21:52.194040 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:21:52.202537 dracut-cmdline[212]: dracut-dracut-053 Aug 5 22:21:52.205913 dracut-cmdline[212]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4763ee6059e6f81f5b007c7bdf42f5dcad676aac40503ddb8a29787eba4ab695 Aug 5 22:21:52.229475 systemd-resolved[218]: Positive Trust Anchors: Aug 5 22:21:52.229496 systemd-resolved[218]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:21:52.229547 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:21:52.251821 systemd-resolved[218]: Defaulting to hostname 'linux'. Aug 5 22:21:52.255055 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:21:52.257546 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:21:52.304894 kernel: SCSI subsystem initialized Aug 5 22:21:52.315886 kernel: Loading iSCSI transport class v2.0-870. Aug 5 22:21:52.328889 kernel: iscsi: registered transport (tcp) Aug 5 22:21:52.354172 kernel: iscsi: registered transport (qla4xxx) Aug 5 22:21:52.354256 kernel: QLogic iSCSI HBA Driver Aug 5 22:21:52.389585 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 5 22:21:52.399264 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 5 22:21:52.428836 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 5 22:21:52.428927 kernel: device-mapper: uevent: version 1.0.3 Aug 5 22:21:52.431798 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Aug 5 22:21:52.475889 kernel: raid6: avx512x4 gen() 18503 MB/s Aug 5 22:21:52.493875 kernel: raid6: avx512x2 gen() 18323 MB/s Aug 5 22:21:52.512873 kernel: raid6: avx512x1 gen() 18297 MB/s Aug 5 22:21:52.531878 kernel: raid6: avx2x4 gen() 18325 MB/s Aug 5 22:21:52.550873 kernel: raid6: avx2x2 gen() 18367 MB/s Aug 5 22:21:52.570705 kernel: raid6: avx2x1 gen() 13987 MB/s Aug 5 22:21:52.570743 kernel: raid6: using algorithm avx512x4 gen() 18503 MB/s Aug 5 22:21:52.592336 kernel: raid6: .... xor() 7970 MB/s, rmw enabled Aug 5 22:21:52.592375 kernel: raid6: using avx512x2 recovery algorithm Aug 5 22:21:52.617888 kernel: xor: automatically using best checksumming function avx Aug 5 22:21:52.782890 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 5 22:21:52.792469 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:21:52.802025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:21:52.818397 systemd-udevd[398]: Using default interface naming scheme 'v255'. Aug 5 22:21:52.824564 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:21:52.839519 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 5 22:21:52.850586 dracut-pre-trigger[410]: rd.md=0: removing MD RAID activation Aug 5 22:21:52.876987 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:21:52.885138 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:21:52.922558 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:21:52.936453 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Aug 5 22:21:52.973238 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 5 22:21:52.979428 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:21:52.985893 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:21:52.991542 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:21:53.001256 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 5 22:21:53.015882 kernel: cryptd: max_cpu_qlen set to 1000 Aug 5 22:21:53.017762 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:21:53.020502 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:53.027082 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:53.029680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:53.029801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.032408 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.049649 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.053183 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:21:53.076278 kernel: AVX2 version of gcm_enc/dec engaged. Aug 5 22:21:53.076335 kernel: AES CTR mode by8 optimization enabled Aug 5 22:21:53.077024 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:53.080977 kernel: hv_vmbus: Vmbus version:5.2 Aug 5 22:21:53.077595 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.092898 kernel: hv_vmbus: registering driver hyperv_keyboard Aug 5 22:21:53.093152 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:21:53.114667 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:53.128905 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Aug 5 22:21:53.130172 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 5 22:21:53.161936 kernel: hv_vmbus: registering driver hv_storvsc Aug 5 22:21:53.165607 kernel: scsi host0: storvsc_host_t Aug 5 22:21:53.165659 kernel: hv_vmbus: registering driver hv_netvsc Aug 5 22:21:53.177738 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Aug 5 22:21:53.177820 kernel: pps_core: LinuxPPS API ver. 1 registered Aug 5 22:21:53.177842 kernel: scsi host1: storvsc_host_t Aug 5 22:21:53.177885 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Aug 5 22:21:53.188432 kernel: hid: raw HID events driver (C) Jiri Kosina Aug 5 22:21:53.188474 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Aug 5 22:21:53.186717 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Aug 5 22:21:53.214395 kernel: PTP clock support registered Aug 5 22:21:53.214447 kernel: hv_vmbus: registering driver hid_hyperv Aug 5 22:21:53.231905 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Aug 5 22:21:53.231958 kernel: hv_utils: Registering HyperV Utility Driver Aug 5 22:21:53.231978 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Aug 5 22:21:53.232194 kernel: hv_vmbus: registering driver hv_utils Aug 5 22:21:53.236628 kernel: hv_utils: Heartbeat IC version 3.0 Aug 5 22:21:53.236695 kernel: hv_utils: Shutdown IC version 3.2 Aug 5 22:21:53.238596 kernel: hv_utils: TimeSync IC version 4.0 Aug 5 22:21:53.196626 systemd-resolved[218]: Clock change detected. Flushing caches. Aug 5 22:21:53.205015 systemd-journald[176]: Time jumped backwards, rotating. Aug 5 22:21:53.217462 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Aug 5 22:21:53.223005 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Aug 5 22:21:53.223019 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Aug 5 22:21:53.237211 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Aug 5 22:21:53.237406 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Aug 5 22:21:53.238927 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 5 22:21:53.239094 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Aug 5 22:21:53.239266 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Aug 5 22:21:53.239422 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:53.239437 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 5 22:21:53.321539 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: VF slot 1 added Aug 5 22:21:53.339469 kernel: hv_vmbus: registering driver hv_pci Aug 5 22:21:53.344764 kernel: hv_pci 8e2c3fc4-a020-4cee-af9a-87cfed70501e: PCI VMBus probing: Using version 0x10004 Aug 5 22:21:53.393739 kernel: hv_pci 8e2c3fc4-a020-4cee-af9a-87cfed70501e: PCI host bridge to bus a020:00 Aug 5 22:21:53.393891 kernel: pci_bus a020:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Aug 5 22:21:53.394023 kernel: pci_bus a020:00: No busn resource found for root bus, will use [bus 00-ff] Aug 5 22:21:53.394147 kernel: pci a020:00:02.0: [15b3:1016] type 00 class 0x020000 Aug 5 22:21:53.394277 kernel: pci a020:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:21:53.394395 kernel: pci a020:00:02.0: enabling Extended Tags Aug 5 22:21:53.394537 kernel: pci a020:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at a020:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Aug 5 22:21:53.394657 kernel: pci_bus a020:00: busn_res: [bus 00-ff] end is updated to 00 Aug 5 22:21:53.394765 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (447) Aug 5 22:21:53.394778 kernel: pci a020:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Aug 5 22:21:53.383675 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. Aug 5 22:21:53.421368 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:21:53.432582 kernel: BTRFS: device fsid 24d7efdf-5582-42d2-aafd-43221656b08f devid 1 transid 36 /dev/sda3 scanned by (udev-worker) (442) Aug 5 22:21:53.460749 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. 
Aug 5 22:21:53.484082 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Aug 5 22:21:53.487052 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Aug 5 22:21:53.502690 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 5 22:21:53.538200 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:53.696635 kernel: mlx5_core a020:00:02.0: enabling device (0000 -> 0002) Aug 5 22:21:53.925942 kernel: mlx5_core a020:00:02.0: firmware version: 14.30.1284 Aug 5 22:21:53.926172 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: VF registering: eth1 Aug 5 22:21:53.926341 kernel: mlx5_core a020:00:02.0 eth1: joined to eth0 Aug 5 22:21:53.926548 kernel: mlx5_core a020:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) Aug 5 22:21:53.932483 kernel: mlx5_core a020:00:02.0 enP40992s1: renamed from eth1 Aug 5 22:21:54.553268 disk-uuid[596]: The operation has completed successfully. Aug 5 22:21:54.556606 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 5 22:21:54.647989 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 5 22:21:54.648101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 5 22:21:54.662644 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 5 22:21:54.667976 sh[691]: Success Aug 5 22:21:54.688768 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Aug 5 22:21:54.776194 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 5 22:21:54.795568 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 5 22:21:54.803528 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 5 22:21:54.836484 kernel: BTRFS info (device dm-0): first mount of filesystem 24d7efdf-5582-42d2-aafd-43221656b08f Aug 5 22:21:54.836529 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:54.841954 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Aug 5 22:21:54.844909 kernel: BTRFS info (device dm-0): disabling log replay at mount time Aug 5 22:21:54.847553 kernel: BTRFS info (device dm-0): using free space tree Aug 5 22:21:54.905781 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 5 22:21:54.910436 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 5 22:21:54.920613 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 5 22:21:54.925602 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 5 22:21:54.939700 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:54.939746 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:54.941395 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:54.952098 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:54.963653 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:54.963238 systemd[1]: mnt-oem.mount: Deactivated successfully. Aug 5 22:21:54.973735 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
Aug 5 22:21:54.986608 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 5 22:21:55.028930 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:21:55.036725 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:21:55.058516 systemd-networkd[875]: lo: Link UP Aug 5 22:21:55.058524 systemd-networkd[875]: lo: Gained carrier Aug 5 22:21:55.060792 systemd-networkd[875]: Enumeration completed Aug 5 22:21:55.061290 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:21:55.062762 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:21:55.062765 systemd-networkd[875]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:21:55.064471 systemd[1]: Reached target network.target - Network. Aug 5 22:21:55.117477 kernel: mlx5_core a020:00:02.0 enP40992s1: Link up Aug 5 22:21:55.150488 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: Data path switched to VF: enP40992s1 Aug 5 22:21:55.151328 systemd-networkd[875]: enP40992s1: Link UP Aug 5 22:21:55.151574 systemd-networkd[875]: eth0: Link UP Aug 5 22:21:55.151834 systemd-networkd[875]: eth0: Gained carrier Aug 5 22:21:55.151862 systemd-networkd[875]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:21:55.161777 systemd-networkd[875]: enP40992s1: Gained carrier Aug 5 22:21:55.203574 systemd-networkd[875]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 5 22:21:55.402628 ignition[796]: Ignition 2.19.0 Aug 5 22:21:55.402643 ignition[796]: Stage: fetch-offline Aug 5 22:21:55.404405 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:21:55.402691 ignition[796]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.402702 ignition[796]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.402830 ignition[796]: parsed url from cmdline: "" Aug 5 22:21:55.402836 ignition[796]: no config URL provided Aug 5 22:21:55.402843 ignition[796]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:21:55.402854 ignition[796]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:21:55.402861 ignition[796]: failed to fetch config: resource requires networking Aug 5 22:21:55.403085 ignition[796]: Ignition finished successfully Aug 5 22:21:55.434612 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 5 22:21:55.448853 ignition[884]: Ignition 2.19.0 Aug 5 22:21:55.448863 ignition[884]: Stage: fetch Aug 5 22:21:55.449073 ignition[884]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.449085 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.449179 ignition[884]: parsed url from cmdline: "" Aug 5 22:21:55.449183 ignition[884]: no config URL provided Aug 5 22:21:55.449187 ignition[884]: reading system config file "/usr/lib/ignition/user.ign" Aug 5 22:21:55.449194 ignition[884]: no config at "/usr/lib/ignition/user.ign" Aug 5 22:21:55.449212 ignition[884]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Aug 5 22:21:55.546105 ignition[884]: GET result: OK Aug 5 22:21:55.546202 ignition[884]: config has been read from IMDS userdata Aug 5 22:21:55.546231 ignition[884]: parsing config with SHA512: 051230907dc2555536046337eed6e1776274d107febcf782fc5895610b726bdbd6843f9b12a06f535c87877e53a0362a0b296e3643fdc48aa65d3efe0c09f581 Aug 5 22:21:55.554920 unknown[884]: fetched base config from "system" Aug 5 22:21:55.555063 unknown[884]: fetched base config from "system" Aug 5 22:21:55.555649 ignition[884]: fetch: fetch complete Aug 5 22:21:55.555069 unknown[884]: fetched user config from "azure" Aug 5 22:21:55.555654 ignition[884]: fetch: fetch passed Aug 5 22:21:55.559930 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 5 22:21:55.555700 ignition[884]: Ignition finished successfully Aug 5 22:21:55.573632 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 5 22:21:55.589766 ignition[891]: Ignition 2.19.0 Aug 5 22:21:55.589777 ignition[891]: Stage: kargs Aug 5 22:21:55.589996 ignition[891]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.591898 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 5 22:21:55.590009 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.590898 ignition[891]: kargs: kargs passed Aug 5 22:21:55.590943 ignition[891]: Ignition finished successfully Aug 5 22:21:55.606807 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 5 22:21:55.622214 ignition[898]: Ignition 2.19.0 Aug 5 22:21:55.622224 ignition[898]: Stage: disks Aug 5 22:21:55.622442 ignition[898]: no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:55.624312 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 5 22:21:55.622470 ignition[898]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:55.623404 ignition[898]: disks: disks passed Aug 5 22:21:55.623444 ignition[898]: Ignition finished successfully Aug 5 22:21:55.637140 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 5 22:21:55.642035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 5 22:21:55.645119 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:21:55.652198 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:21:55.656751 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:21:55.664892 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 5 22:21:55.689398 systemd-fsck[907]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Aug 5 22:21:55.692474 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 5 22:21:55.705392 systemd[1]: Mounting sysroot.mount - /sysroot... 
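The fetch stage above reads the Ignition config from the Azure Instance Metadata Service userData endpoint and then logs a SHA-512 of the config it is about to parse. A minimal sketch of an equivalent fetch in Python, assuming the usual IMDS "Metadata: true" request header and base64-encoded userData; the decode-and-hash flow is illustrative, not Ignition's actual implementation:

import base64
import hashlib
import urllib.request

# userData endpoint exactly as logged by the fetch stage above
IMDS_USERDATA = ("http://169.254.169.254/metadata/instance/compute/userData"
                 "?api-version=2021-01-01&format=text")

# Azure IMDS normally requires the "Metadata: true" header (assumption here)
req = urllib.request.Request(IMDS_USERDATA, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    raw = resp.read()                    # base64 text on Azure (assumption)

config = base64.b64decode(raw)           # the Ignition JSON itself
print("parsing config with SHA512:", hashlib.sha512(config).hexdigest())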
Aug 5 22:21:55.806604 kernel: EXT4-fs (sda9): mounted filesystem b6919f21-4a66-43c1-b816-e6fe5d1b75ef r/w with ordered data mode. Quota mode: none. Aug 5 22:21:55.807173 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 5 22:21:55.811353 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 5 22:21:55.827543 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:21:55.834949 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 5 22:21:55.844487 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (918) Aug 5 22:21:55.853491 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:55.853546 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:55.853570 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:55.854679 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Aug 5 22:21:55.862877 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 5 22:21:55.867702 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:55.863529 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:21:55.871776 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:21:55.876403 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 5 22:21:55.883645 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 5 22:21:56.091219 coreos-metadata[920]: Aug 05 22:21:56.091 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:21:56.096516 coreos-metadata[920]: Aug 05 22:21:56.096 INFO Fetch successful Aug 5 22:21:56.099338 coreos-metadata[920]: Aug 05 22:21:56.096 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:21:56.114430 coreos-metadata[920]: Aug 05 22:21:56.114 INFO Fetch successful Aug 5 22:21:56.118385 coreos-metadata[920]: Aug 05 22:21:56.118 INFO wrote hostname ci-4012.1.0-a-bfd2eb4520 to /sysroot/etc/hostname Aug 5 22:21:56.123801 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:21:56.149835 initrd-setup-root[947]: cut: /sysroot/etc/passwd: No such file or directory Aug 5 22:21:56.158626 initrd-setup-root[954]: cut: /sysroot/etc/group: No such file or directory Aug 5 22:21:56.165954 initrd-setup-root[961]: cut: /sysroot/etc/shadow: No such file or directory Aug 5 22:21:56.171041 initrd-setup-root[968]: cut: /sysroot/etc/gshadow: No such file or directory Aug 5 22:21:56.395624 systemd-networkd[875]: eth0: Gained IPv6LL Aug 5 22:21:56.450345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 5 22:21:56.459553 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 5 22:21:56.465975 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 5 22:21:56.471389 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
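The flatcar-metadata-hostname unit above resolves the VM name from IMDS and writes it into the mounted sysroot before switch-root. A rough sketch of that flow, using the compute/name endpoint and the /sysroot/etc/hostname path taken from the log; the "Metadata: true" header is an assumption, and the real agent is coreos-metadata rather than this script:

import urllib.request

# instance-name endpoint exactly as logged by coreos-metadata above
NAME_URL = ("http://169.254.169.254/metadata/instance/compute/name"
            "?api-version=2017-08-01&format=text")

req = urllib.request.Request(NAME_URL, headers={"Metadata": "true"})
with urllib.request.urlopen(req, timeout=5) as resp:
    hostname = resp.read().decode().strip()

# destination path as logged: the hostname lands in the new root
with open("/sysroot/etc/hostname", "w") as f:
    f.write(hostname + "\n")
print("wrote hostname", hostname, "to /sysroot/etc/hostname")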
Aug 5 22:21:56.477954 kernel: BTRFS info (device sda6): last unmount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:56.504669 ignition[1035]: INFO : Ignition 2.19.0 Aug 5 22:21:56.504669 ignition[1035]: INFO : Stage: mount Aug 5 22:21:56.504669 ignition[1035]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:56.504669 ignition[1035]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:56.504669 ignition[1035]: INFO : mount: mount passed Aug 5 22:21:56.504669 ignition[1035]: INFO : Ignition finished successfully Aug 5 22:21:56.504507 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 5 22:21:56.521066 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 5 22:21:56.527486 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 5 22:21:56.540654 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 5 22:21:56.549467 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1048) Aug 5 22:21:56.553473 kernel: BTRFS info (device sda6): first mount of filesystem b97abe4c-c512-4c9a-9e43-191f8cef484b Aug 5 22:21:56.553517 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 5 22:21:56.557392 kernel: BTRFS info (device sda6): using free space tree Aug 5 22:21:56.562468 kernel: BTRFS info (device sda6): auto enabling async discard Aug 5 22:21:56.564021 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 5 22:21:56.587848 systemd-networkd[875]: enP40992s1: Gained IPv6LL Aug 5 22:21:56.590146 ignition[1065]: INFO : Ignition 2.19.0 Aug 5 22:21:56.590146 ignition[1065]: INFO : Stage: files Aug 5 22:21:56.590146 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:56.590146 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:56.590146 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Aug 5 22:21:56.600901 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 5 22:21:56.600901 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 5 22:21:56.607224 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 5 22:21:56.605849 unknown[1065]: wrote ssh authorized keys file for user: core Aug 5 22:21:56.622377 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:21:56.622377 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 5 22:21:56.814474 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 5 22:21:56.874425 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 5 22:21:56.881286 
ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:56.881286 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw: attempt #1 Aug 5 22:21:57.382373 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 5 22:21:57.523002 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw" Aug 5 22:21:57.523002 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 5 22:21:57.531258 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 5 22:21:57.535591 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Aug 5 22:21:57.545633 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Aug 5 22:21:57.548806 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:21:57.555084 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 5 22:21:57.555084 ignition[1065]: INFO : files: files passed Aug 5 22:21:57.555084 ignition[1065]: INFO : 
Ignition finished successfully Aug 5 22:21:57.550630 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 5 22:21:57.563635 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 5 22:21:57.569784 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 5 22:21:57.577853 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 5 22:21:57.577956 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 5 22:21:57.592900 initrd-setup-root-after-ignition[1094]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.592900 initrd-setup-root-after-ignition[1094]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.601652 initrd-setup-root-after-ignition[1098]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 5 22:21:57.606506 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:21:57.609397 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 5 22:21:57.624786 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 5 22:21:57.650636 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 5 22:21:57.650746 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 5 22:21:57.656694 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 5 22:21:57.663975 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 5 22:21:57.666586 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 5 22:21:57.672624 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 5 22:21:57.684930 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:21:57.692600 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 5 22:21:57.702882 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:21:57.707871 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:21:57.713000 systemd[1]: Stopped target timers.target - Timer Units. Aug 5 22:21:57.717000 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 5 22:21:57.717148 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 5 22:21:57.724371 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 5 22:21:57.728864 systemd[1]: Stopped target basic.target - Basic System. Aug 5 22:21:57.730803 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 5 22:21:57.737319 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 5 22:21:57.740088 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 5 22:21:57.747367 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 5 22:21:57.749789 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 5 22:21:57.754551 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 5 22:21:57.759597 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 5 22:21:57.766046 systemd[1]: Stopped target swap.target - Swaps. Aug 5 22:21:57.767810 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
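The files stage recorded above writes user files and SSH keys, downloads a kubernetes sysext image, links it into /etc/extensions, and enables prepare-helm.service. A sketch of an Ignition-v3-style config fragment, expressed as a Python dict, that would drive the link, download, and unit operations; the field names follow the public Ignition spec, the paths and download URL are taken from the log, while the spec version string and the unit body are placeholders and assumptions:

import json

config = {
    "ignition": {"version": "3.3.0"},   # spec version string assumed
    "storage": {
        "files": [
            {
                "path": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
                # download source exactly as logged by op(a) in the files stage
                "contents": {"source": "https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.28.7-x86-64.raw"},
            }
        ],
        "links": [
            {
                # symlink written by op(9) in the log
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.28.7-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            # unit contents elided; the log only shows the unit being written and enabled
            {"name": "prepare-helm.service", "enabled": True, "contents": "..."}
        ]
    },
}
print(json.dumps(config, indent=2))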
Aug 5 22:21:57.767928 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 5 22:21:57.776390 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:21:57.781083 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:21:57.783812 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 5 22:21:57.786157 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:21:57.788971 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 5 22:21:57.789110 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 5 22:21:57.800248 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 5 22:21:57.800429 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 5 22:21:57.805550 systemd[1]: ignition-files.service: Deactivated successfully. Aug 5 22:21:57.810373 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 5 22:21:57.814654 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Aug 5 22:21:57.814789 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Aug 5 22:21:57.831232 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 5 22:21:57.833268 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 5 22:21:57.833431 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:21:57.850414 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 5 22:21:57.852586 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 5 22:21:57.860102 ignition[1118]: INFO : Ignition 2.19.0 Aug 5 22:21:57.860102 ignition[1118]: INFO : Stage: umount Aug 5 22:21:57.860102 ignition[1118]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 5 22:21:57.860102 ignition[1118]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Aug 5 22:21:57.860102 ignition[1118]: INFO : umount: umount passed Aug 5 22:21:57.860102 ignition[1118]: INFO : Ignition finished successfully Aug 5 22:21:57.852748 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:21:57.855810 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 5 22:21:57.856053 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 5 22:21:57.870785 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 5 22:21:57.870874 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 5 22:21:57.874758 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 5 22:21:57.874846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 5 22:21:57.880165 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 5 22:21:57.880235 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 5 22:21:57.885419 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 5 22:21:57.885540 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 5 22:21:57.889374 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 5 22:21:57.889423 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 5 22:21:57.891738 systemd[1]: Stopped target network.target - Network. Aug 5 22:21:57.896179 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. 
Aug 5 22:21:57.896239 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 5 22:21:57.900687 systemd[1]: Stopped target paths.target - Path Units. Aug 5 22:21:57.901145 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 5 22:21:57.904511 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:21:57.938203 systemd[1]: Stopped target slices.target - Slice Units. Aug 5 22:21:57.940161 systemd[1]: Stopped target sockets.target - Socket Units. Aug 5 22:21:57.944000 systemd[1]: iscsid.socket: Deactivated successfully. Aug 5 22:21:57.944058 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 5 22:21:57.946287 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 5 22:21:57.950025 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 5 22:21:57.958068 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 5 22:21:57.958137 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 5 22:21:57.962185 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 5 22:21:57.962237 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 5 22:21:57.965089 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 5 22:21:57.969321 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 5 22:21:57.970522 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 5 22:21:57.977576 systemd-networkd[875]: eth0: DHCPv6 lease lost Aug 5 22:21:57.979528 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 5 22:21:57.979654 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 5 22:21:57.986463 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 5 22:21:57.986602 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 5 22:21:57.991180 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 5 22:21:57.991257 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:21:58.012633 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 5 22:21:58.016839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 5 22:21:58.016911 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 5 22:21:58.024761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 5 22:21:58.024829 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:21:58.033074 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 5 22:21:58.033139 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 5 22:21:58.037791 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 5 22:21:58.037849 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:21:58.047034 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:21:58.071155 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 5 22:21:58.071327 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:21:58.076590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 5 22:21:58.076640 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Aug 5 22:21:58.087831 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 5 22:21:58.087885 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:21:58.094795 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 5 22:21:58.094863 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 5 22:21:58.105251 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 5 22:21:58.107332 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: Data path switched from VF: enP40992s1 Aug 5 22:21:58.105320 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 5 22:21:58.110083 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 5 22:21:58.110130 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 5 22:21:58.122665 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 5 22:21:58.128255 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 5 22:21:58.128330 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:21:58.131336 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:21:58.131403 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:21:58.134352 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 5 22:21:58.134441 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 5 22:21:58.149603 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 5 22:21:58.149702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 5 22:21:58.917189 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 5 22:21:58.917321 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 5 22:21:58.920380 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 5 22:21:58.925224 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 5 22:21:58.925281 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 5 22:21:58.942624 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 5 22:21:58.951915 systemd[1]: Switching root. Aug 5 22:21:59.033710 systemd-journald[176]: Journal stopped Aug 5 22:22:02.044301 systemd-journald[176]: Received SIGTERM from PID 1 (systemd). Aug 5 22:22:02.044333 kernel: SELinux: policy capability network_peer_controls=1 Aug 5 22:22:02.044347 kernel: SELinux: policy capability open_perms=1 Aug 5 22:22:02.044356 kernel: SELinux: policy capability extended_socket_class=1 Aug 5 22:22:02.044367 kernel: SELinux: policy capability always_check_network=0 Aug 5 22:22:02.044375 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 5 22:22:02.044388 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 5 22:22:02.044400 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 5 22:22:02.044410 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 5 22:22:02.044421 kernel: audit: type=1403 audit(1722896520.632:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 5 22:22:02.044433 systemd[1]: Successfully loaded SELinux policy in 74.264ms. Aug 5 22:22:02.044446 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.332ms. 
Aug 5 22:22:02.050623 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Aug 5 22:22:02.050646 systemd[1]: Detected virtualization microsoft. Aug 5 22:22:02.050670 systemd[1]: Detected architecture x86-64. Aug 5 22:22:02.050687 systemd[1]: Detected first boot. Aug 5 22:22:02.050705 systemd[1]: Hostname set to . Aug 5 22:22:02.050721 systemd[1]: Initializing machine ID from random generator. Aug 5 22:22:02.050737 zram_generator::config[1161]: No configuration found. Aug 5 22:22:02.050760 systemd[1]: Populated /etc with preset unit settings. Aug 5 22:22:02.050779 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 5 22:22:02.050796 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 5 22:22:02.050813 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 5 22:22:02.050831 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 5 22:22:02.050850 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 5 22:22:02.050868 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 5 22:22:02.050888 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 5 22:22:02.050906 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 5 22:22:02.050923 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 5 22:22:02.050941 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 5 22:22:02.050957 systemd[1]: Created slice user.slice - User and Session Slice. Aug 5 22:22:02.050974 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 5 22:22:02.050992 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 5 22:22:02.051009 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 5 22:22:02.051029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 5 22:22:02.051047 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 5 22:22:02.051064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 5 22:22:02.051081 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 5 22:22:02.051098 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 5 22:22:02.051115 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 5 22:22:02.051137 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 5 22:22:02.051154 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 5 22:22:02.051175 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 5 22:22:02.051192 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 5 22:22:02.051210 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 5 22:22:02.051229 systemd[1]: Reached target slices.target - Slice Units. 
Aug 5 22:22:02.051246 systemd[1]: Reached target swap.target - Swaps. Aug 5 22:22:02.051263 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 5 22:22:02.051281 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 5 22:22:02.051300 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 5 22:22:02.051317 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 5 22:22:02.051335 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 5 22:22:02.051353 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 5 22:22:02.051371 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 5 22:22:02.051391 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 5 22:22:02.051408 systemd[1]: Mounting media.mount - External Media Directory... Aug 5 22:22:02.051426 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:02.051444 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 5 22:22:02.051479 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 5 22:22:02.051498 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 5 22:22:02.051517 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 5 22:22:02.051534 systemd[1]: Reached target machines.target - Containers. Aug 5 22:22:02.051555 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 5 22:22:02.051573 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:22:02.051590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 5 22:22:02.051608 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 5 22:22:02.051626 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:22:02.051644 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:22:02.051663 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:22:02.051680 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 5 22:22:02.051698 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:22:02.051719 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 5 22:22:02.051736 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 5 22:22:02.051754 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 5 22:22:02.051772 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 5 22:22:02.051789 systemd[1]: Stopped systemd-fsck-usr.service. Aug 5 22:22:02.051807 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 5 22:22:02.051823 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 5 22:22:02.051867 systemd-journald[1252]: Collecting audit messages is disabled. Aug 5 22:22:02.051907 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Aug 5 22:22:02.051925 kernel: loop: module loaded Aug 5 22:22:02.051942 systemd-journald[1252]: Journal started Aug 5 22:22:02.051979 systemd-journald[1252]: Runtime Journal (/run/log/journal/24a0034a73574cdcbde06bd356043db2) is 8.0M, max 158.8M, 150.8M free. Aug 5 22:22:01.511844 systemd[1]: Queued start job for default target multi-user.target. Aug 5 22:22:01.552958 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 5 22:22:01.553333 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 5 22:22:02.063715 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 5 22:22:02.089644 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 5 22:22:02.089721 systemd[1]: verity-setup.service: Deactivated successfully. Aug 5 22:22:02.103373 kernel: fuse: init (API version 7.39) Aug 5 22:22:02.103444 systemd[1]: Stopped verity-setup.service. Aug 5 22:22:02.103493 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:02.113477 systemd[1]: Started systemd-journald.service - Journal Service. Aug 5 22:22:02.119126 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 5 22:22:02.124347 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 5 22:22:02.127995 systemd[1]: Mounted media.mount - External Media Directory. Aug 5 22:22:02.131809 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 5 22:22:02.134648 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 5 22:22:02.137681 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 5 22:22:02.140496 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 5 22:22:02.144080 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 5 22:22:02.144266 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 5 22:22:02.148044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:22:02.148214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:22:02.153088 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:22:02.153835 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:22:02.157409 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 5 22:22:02.158296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 5 22:22:02.161340 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:22:02.162734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:22:02.166044 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 5 22:22:02.169825 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 5 22:22:02.182119 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 5 22:22:02.195898 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 5 22:22:02.203165 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 5 22:22:02.213634 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 5 22:22:02.221525 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Aug 5 22:22:02.225894 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 5 22:22:02.225947 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 5 22:22:02.232897 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Aug 5 22:22:02.240613 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 5 22:22:02.248559 kernel: ACPI: bus type drm_connector registered Aug 5 22:22:02.249317 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 5 22:22:02.251785 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:22:02.260347 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 5 22:22:02.268435 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 5 22:22:02.271792 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:22:02.272994 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 5 22:22:02.275420 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:22:02.278617 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 5 22:22:02.287617 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 5 22:22:02.291605 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Aug 5 22:22:02.295576 systemd-journald[1252]: Time spent on flushing to /var/log/journal/24a0034a73574cdcbde06bd356043db2 is 159.950ms for 953 entries. Aug 5 22:22:02.295576 systemd-journald[1252]: System Journal (/var/log/journal/24a0034a73574cdcbde06bd356043db2) is 11.9M, max 2.6G, 2.6G free. Aug 5 22:22:02.538382 systemd-journald[1252]: Received client request to flush runtime journal. Aug 5 22:22:02.538437 kernel: loop0: detected capacity change from 0 to 62456 Aug 5 22:22:02.538475 kernel: block loop0: the capability attribute has been deprecated. Aug 5 22:22:02.538690 systemd-journald[1252]: /var/log/journal/24a0034a73574cdcbde06bd356043db2/system.journal: Realtime clock jumped backwards relative to last journal entry, rotating. Aug 5 22:22:02.538746 systemd-journald[1252]: Rotating system journal. Aug 5 22:22:02.301846 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:22:02.302223 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:22:02.305901 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 5 22:22:02.308980 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 5 22:22:02.312892 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 5 22:22:02.318006 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 5 22:22:02.329520 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 5 22:22:02.334526 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 5 22:22:02.348715 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... 
Aug 5 22:22:02.353386 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 5 22:22:02.367214 udevadm[1296]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Aug 5 22:22:02.446089 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 5 22:22:02.454655 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 5 22:22:02.502778 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 5 22:22:02.524294 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Aug 5 22:22:02.524316 systemd-tmpfiles[1308]: ACLs are not supported, ignoring. Aug 5 22:22:02.530878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 5 22:22:02.540333 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 5 22:22:02.547778 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 5 22:22:02.548890 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Aug 5 22:22:02.570472 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 5 22:22:02.605479 kernel: loop1: detected capacity change from 0 to 139760 Aug 5 22:22:02.721480 kernel: loop2: detected capacity change from 0 to 209816 Aug 5 22:22:02.753563 kernel: loop3: detected capacity change from 0 to 80568 Aug 5 22:22:02.859485 kernel: loop4: detected capacity change from 0 to 62456 Aug 5 22:22:02.869483 kernel: loop5: detected capacity change from 0 to 139760 Aug 5 22:22:02.888478 kernel: loop6: detected capacity change from 0 to 209816 Aug 5 22:22:02.902484 kernel: loop7: detected capacity change from 0 to 80568 Aug 5 22:22:02.910725 (sd-merge)[1324]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Aug 5 22:22:02.911309 (sd-merge)[1324]: Merged extensions into '/usr'. Aug 5 22:22:02.917125 systemd[1]: Reloading requested from client PID 1294 ('systemd-sysext') (unit systemd-sysext.service)... Aug 5 22:22:02.917141 systemd[1]: Reloading... Aug 5 22:22:03.007479 zram_generator::config[1345]: No configuration found. Aug 5 22:22:03.226684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:22:03.320238 systemd[1]: Reloading finished in 402 ms. Aug 5 22:22:03.351468 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 5 22:22:03.361615 systemd[1]: Starting ensure-sysext.service... Aug 5 22:22:03.375624 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Aug 5 22:22:03.390825 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 5 22:22:03.391259 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 5 22:22:03.392052 systemd-tmpfiles[1407]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 5 22:22:03.392329 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Aug 5 22:22:03.392423 systemd-tmpfiles[1407]: ACLs are not supported, ignoring. Aug 5 22:22:03.526275 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. 
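The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extensions onto /usr. A hedged sketch that lists candidate extension images such a merge would pick up; the search paths are the commonly documented sysext locations and are an assumption here, and only /etc/extensions/kubernetes.raw is directly evidenced by this log (written during the Ignition files stage):

from pathlib import Path

# Commonly documented systemd-sysext search locations (assumption, not from the log)
SEARCH_PATHS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

for base in map(Path, SEARCH_PATHS):
    if not base.is_dir():
        continue
    for entry in sorted(base.iterdir()):
        kind = "image" if entry.suffix == ".raw" else "directory"
        print(f"{base}/{entry.name} ({kind})")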
Aug 5 22:22:03.526293 systemd-tmpfiles[1407]: Skipping /boot Aug 5 22:22:03.534325 systemd[1]: Reloading requested from client PID 1406 ('systemctl') (unit ensure-sysext.service)... Aug 5 22:22:03.534345 systemd[1]: Reloading... Aug 5 22:22:03.552131 systemd-tmpfiles[1407]: Detected autofs mount point /boot during canonicalization of boot. Aug 5 22:22:03.552144 systemd-tmpfiles[1407]: Skipping /boot Aug 5 22:22:03.637487 zram_generator::config[1436]: No configuration found. Aug 5 22:22:03.753852 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:22:03.821700 systemd[1]: Reloading finished in 286 ms. Aug 5 22:22:03.836576 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 5 22:22:03.844920 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Aug 5 22:22:03.859604 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:22:03.867025 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 5 22:22:03.873763 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 5 22:22:03.894599 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 5 22:22:03.899541 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 5 22:22:03.907986 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 5 22:22:03.921831 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 5 22:22:03.927868 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:03.930570 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:22:03.940250 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 5 22:22:03.950942 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 5 22:22:03.970347 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:22:03.972768 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:22:03.972927 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:03.983360 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:03.983709 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:22:03.983998 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:22:03.984198 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:03.986988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 5 22:22:03.993864 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Aug 5 22:22:03.994205 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:22:04.015020 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 5 22:22:04.015369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 5 22:22:04.022648 systemd[1]: Expecting device dev-ptp_hyperv.device - /dev/ptp_hyperv... Aug 5 22:22:04.025645 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:04.026148 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 5 22:22:04.034762 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 5 22:22:04.040763 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 5 22:22:04.047970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 5 22:22:04.048242 systemd[1]: Reached target time-set.target - System Time Set. Aug 5 22:22:04.050693 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 5 22:22:04.053798 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 5 22:22:04.062083 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 5 22:22:04.066919 systemd-udevd[1505]: Using default interface naming scheme 'v255'. Aug 5 22:22:04.068731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 5 22:22:04.069179 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 5 22:22:04.074900 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 5 22:22:04.075083 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 5 22:22:04.081110 augenrules[1528]: No rules Aug 5 22:22:04.080116 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 5 22:22:04.080274 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 5 22:22:04.083759 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:22:04.095380 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 5 22:22:04.095483 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 5 22:22:04.096165 systemd[1]: Finished ensure-sysext.service. Aug 5 22:22:04.127070 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 5 22:22:04.144692 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 5 22:22:04.161115 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 5 22:22:04.170363 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 5 22:22:04.194445 systemd-resolved[1504]: Positive Trust Anchors: Aug 5 22:22:04.194472 systemd-resolved[1504]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 5 22:22:04.194535 systemd-resolved[1504]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa corp home internal intranet lan local private test Aug 5 22:22:04.211357 systemd-resolved[1504]: Using system hostname 'ci-4012.1.0-a-bfd2eb4520'. Aug 5 22:22:04.215673 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 5 22:22:04.218929 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 5 22:22:04.275848 systemd-networkd[1547]: lo: Link UP Aug 5 22:22:04.275861 systemd-networkd[1547]: lo: Gained carrier Aug 5 22:22:04.278167 systemd-networkd[1547]: Enumeration completed Aug 5 22:22:04.278277 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 5 22:22:04.289485 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1543) Aug 5 22:22:04.289684 systemd[1]: Reached target network.target - Network. Aug 5 22:22:04.302200 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 5 22:22:04.329952 systemd-networkd[1547]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:22:04.329966 systemd-networkd[1547]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:22:04.361189 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 5 22:22:04.402445 kernel: mlx5_core a020:00:02.0 enP40992s1: Link up Aug 5 22:22:04.408701 kernel: hv_vmbus: registering driver hyperv_fb Aug 5 22:22:04.413951 kernel: mousedev: PS/2 mouse device common for all mice Aug 5 22:22:04.414016 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Aug 5 22:22:04.417228 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Aug 5 22:22:04.418567 kernel: hv_netvsc 7c1e5209-ebe5-7c1e-5209-ebe57c1e5209 eth0: Data path switched to VF: enP40992s1 Aug 5 22:22:04.424430 kernel: hv_vmbus: registering driver hv_balloon Aug 5 22:22:04.424487 kernel: Console: switching to colour dummy device 80x25 Aug 5 22:22:04.428486 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Aug 5 22:22:04.428532 kernel: Console: switching to colour frame buffer device 128x48 Aug 5 22:22:04.432119 systemd-networkd[1547]: enP40992s1: Link UP Aug 5 22:22:04.434719 systemd-networkd[1547]: eth0: Link UP Aug 5 22:22:04.434727 systemd-networkd[1547]: eth0: Gained carrier Aug 5 22:22:04.434754 systemd-networkd[1547]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Aug 5 22:22:04.439836 systemd-networkd[1547]: enP40992s1: Gained carrier Aug 5 22:22:04.464180 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1546) Aug 5 22:22:04.464588 systemd-networkd[1547]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 5 22:22:04.573098 ldconfig[1290]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 5 22:22:04.589038 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 5 22:22:04.603630 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 5 22:22:04.620183 systemd[1]: Condition check resulted in dev-ptp_hyperv.device - /dev/ptp_hyperv being skipped. Aug 5 22:22:04.643519 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 5 22:22:04.687795 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:22:04.728719 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 5 22:22:04.728948 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:22:04.746683 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 5 22:22:04.796901 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Aug 5 22:22:04.817110 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 5 22:22:04.847995 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 5 22:22:04.873471 kernel: kvm_intel: Using Hyper-V Enlightened VMCS Aug 5 22:22:04.904506 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Aug 5 22:22:04.910657 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Aug 5 22:22:04.936095 lvm[1632]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:22:04.948179 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 5 22:22:04.963418 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Aug 5 22:22:04.966569 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 5 22:22:04.972070 systemd[1]: Reached target sysinit.target - System Initialization. Aug 5 22:22:04.974531 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 5 22:22:04.977295 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 5 22:22:04.980109 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 5 22:22:04.982541 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 5 22:22:04.985367 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 5 22:22:04.988223 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 5 22:22:04.988262 systemd[1]: Reached target paths.target - Path Units. Aug 5 22:22:04.990212 systemd[1]: Reached target timers.target - Timer Units. Aug 5 22:22:04.996344 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 5 22:22:05.000367 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Aug 5 22:22:05.010298 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 5 22:22:05.013781 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Aug 5 22:22:05.017085 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 5 22:22:05.019830 systemd[1]: Reached target sockets.target - Socket Units. Aug 5 22:22:05.021922 systemd[1]: Reached target basic.target - Basic System. Aug 5 22:22:05.023957 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:22:05.023991 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 5 22:22:05.038584 systemd[1]: Starting chronyd.service - NTP client/server... Aug 5 22:22:05.045588 systemd[1]: Starting containerd.service - containerd container runtime... Aug 5 22:22:05.056948 lvm[1639]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Aug 5 22:22:05.058598 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 5 22:22:05.062160 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 5 22:22:05.066576 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 5 22:22:05.070669 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 5 22:22:05.074522 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 5 22:22:05.079633 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 5 22:22:05.085586 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 5 22:22:05.095592 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 5 22:22:05.099913 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 5 22:22:05.114619 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 5 22:22:05.115720 (chronyd)[1640]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS Aug 5 22:22:05.122417 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 5 22:22:05.123031 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 5 22:22:05.133636 systemd[1]: Starting update-engine.service - Update Engine... Aug 5 22:22:05.136401 chronyd[1656]: chronyd version 4.5 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) Aug 5 22:22:05.142052 dbus-daemon[1643]: [system] SELinux support is enabled Aug 5 22:22:05.144971 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 5 22:22:05.147358 chronyd[1656]: Timezone right/UTC failed leap second check, ignoring Aug 5 22:22:05.147603 chronyd[1656]: Loaded seccomp filter (level 2) Aug 5 22:22:05.161086 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 5 22:22:05.172505 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Aug 5 22:22:05.174470 jq[1644]: false Aug 5 22:22:05.179397 systemd[1]: Started chronyd.service - NTP client/server. 
Aug 5 22:22:05.181485 extend-filesystems[1645]: Found loop4 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found loop5 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found loop6 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found loop7 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda1 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda2 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda3 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found usr Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda4 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda6 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda7 Aug 5 22:22:05.183547 extend-filesystems[1645]: Found sda9 Aug 5 22:22:05.183547 extend-filesystems[1645]: Checking size of /dev/sda9 Aug 5 22:22:05.194365 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 5 22:22:05.265576 update_engine[1653]: I0805 22:22:05.261430 1653 main.cc:92] Flatcar Update Engine starting Aug 5 22:22:05.265814 jq[1657]: true Aug 5 22:22:05.198089 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 5 22:22:05.208926 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 5 22:22:05.209135 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 5 22:22:05.266375 jq[1675]: true Aug 5 22:22:05.262498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 5 22:22:05.276725 extend-filesystems[1645]: Old size kept for /dev/sda9 Aug 5 22:22:05.276725 extend-filesystems[1645]: Found sr0 Aug 5 22:22:05.262549 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 5 22:22:05.266581 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 5 22:22:05.266605 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 5 22:22:05.272744 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 5 22:22:05.273525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 5 22:22:05.274093 (ntainerd)[1681]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 5 22:22:05.281661 systemd[1]: motdgen.service: Deactivated successfully. Aug 5 22:22:05.281883 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 5 22:22:05.283270 update_engine[1653]: I0805 22:22:05.283236 1653 update_check_scheduler.cc:74] Next update check in 9m37s Aug 5 22:22:05.287263 systemd[1]: Started update-engine.service - Update Engine. Aug 5 22:22:05.297770 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 5 22:22:05.315506 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (1546) Aug 5 22:22:05.336717 systemd-logind[1650]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 5 22:22:05.337643 systemd-logind[1650]: New seat seat0. Aug 5 22:22:05.338465 systemd[1]: Started systemd-logind.service - User Login Management. 
Aug 5 22:22:05.421817 coreos-metadata[1642]: Aug 05 22:22:05.421 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Aug 5 22:22:05.435633 coreos-metadata[1642]: Aug 05 22:22:05.425 INFO Fetch successful Aug 5 22:22:05.435633 coreos-metadata[1642]: Aug 05 22:22:05.425 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Aug 5 22:22:05.435633 coreos-metadata[1642]: Aug 05 22:22:05.434 INFO Fetch successful Aug 5 22:22:05.435633 coreos-metadata[1642]: Aug 05 22:22:05.434 INFO Fetching http://168.63.129.16/machine/82357370-8978-4d43-af99-4a7e07f162d9/484ec495%2D5bf8%2D4468%2Da1b6%2D38c68d89393b.%5Fci%2D4012.1.0%2Da%2Dbfd2eb4520?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Aug 5 22:22:05.437959 coreos-metadata[1642]: Aug 05 22:22:05.437 INFO Fetch successful Aug 5 22:22:05.438232 coreos-metadata[1642]: Aug 05 22:22:05.438 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Aug 5 22:22:05.449005 coreos-metadata[1642]: Aug 05 22:22:05.448 INFO Fetch successful Aug 5 22:22:05.474446 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 5 22:22:05.478675 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 5 22:22:05.482504 bash[1708]: Updated "/home/core/.ssh/authorized_keys" Aug 5 22:22:05.483555 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 5 22:22:05.500422 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Aug 5 22:22:05.536818 tar[1667]: linux-amd64/helm Aug 5 22:22:05.601959 locksmithd[1707]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 5 22:22:05.870888 systemd-networkd[1547]: eth0: Gained IPv6LL Aug 5 22:22:05.874817 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 5 22:22:05.880292 systemd[1]: Reached target network-online.target - Network is Online. Aug 5 22:22:05.897680 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:22:05.906762 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 5 22:22:05.984371 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 5 22:22:06.062686 systemd-networkd[1547]: enP40992s1: Gained IPv6LL Aug 5 22:22:06.130067 containerd[1681]: time="2024-08-05T22:22:06.129828800Z" level=info msg="starting containerd" revision=cd7148ac666309abf41fd4a49a8a5895b905e7f3 version=v1.7.18 Aug 5 22:22:06.216269 containerd[1681]: time="2024-08-05T22:22:06.214669800Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Aug 5 22:22:06.216269 containerd[1681]: time="2024-08-05T22:22:06.214732400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222212100Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.43-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222254700Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222558400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222585200Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222688800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222748700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222765600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.222836100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.223049100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.223069900Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Aug 5 22:22:06.223123 containerd[1681]: time="2024-08-05T22:22:06.223085000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Aug 5 22:22:06.225158 containerd[1681]: time="2024-08-05T22:22:06.224904800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Aug 5 22:22:06.225158 containerd[1681]: time="2024-08-05T22:22:06.224933900Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Aug 5 22:22:06.225158 containerd[1681]: time="2024-08-05T22:22:06.225012600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Aug 5 22:22:06.225158 containerd[1681]: time="2024-08-05T22:22:06.225027000Z" level=info msg="metadata content store policy set" policy=shared Aug 5 22:22:06.238688 containerd[1681]: time="2024-08-05T22:22:06.238650900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Aug 5 22:22:06.238774 containerd[1681]: time="2024-08-05T22:22:06.238696800Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Aug 5 22:22:06.238774 containerd[1681]: time="2024-08-05T22:22:06.238715000Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Aug 5 22:22:06.238774 containerd[1681]: time="2024-08-05T22:22:06.238751800Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Aug 5 22:22:06.238774 containerd[1681]: time="2024-08-05T22:22:06.238770500Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Aug 5 22:22:06.238774 containerd[1681]: time="2024-08-05T22:22:06.238783700Z" level=info msg="NRI interface is disabled by configuration." Aug 5 22:22:06.238951 containerd[1681]: time="2024-08-05T22:22:06.238801800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Aug 5 22:22:06.238987 containerd[1681]: time="2024-08-05T22:22:06.238950800Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Aug 5 22:22:06.238987 containerd[1681]: time="2024-08-05T22:22:06.238973600Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Aug 5 22:22:06.239050 containerd[1681]: time="2024-08-05T22:22:06.238992800Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Aug 5 22:22:06.239050 containerd[1681]: time="2024-08-05T22:22:06.239011900Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Aug 5 22:22:06.239050 containerd[1681]: time="2024-08-05T22:22:06.239032100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239150 containerd[1681]: time="2024-08-05T22:22:06.239058200Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239150 containerd[1681]: time="2024-08-05T22:22:06.239078700Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239150 containerd[1681]: time="2024-08-05T22:22:06.239098700Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239150 containerd[1681]: time="2024-08-05T22:22:06.239119200Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239150 containerd[1681]: time="2024-08-05T22:22:06.239138300Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239304 containerd[1681]: time="2024-08-05T22:22:06.239168500Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.239304 containerd[1681]: time="2024-08-05T22:22:06.239191000Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Aug 5 22:22:06.239365 containerd[1681]: time="2024-08-05T22:22:06.239312500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Aug 5 22:22:06.239966 containerd[1681]: time="2024-08-05T22:22:06.239941400Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Aug 5 22:22:06.240146 containerd[1681]: time="2024-08-05T22:22:06.240127600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.240604 containerd[1681]: time="2024-08-05T22:22:06.240580600Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." 
type=io.containerd.transfer.v1 Aug 5 22:22:06.240711 containerd[1681]: time="2024-08-05T22:22:06.240696200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Aug 5 22:22:06.241139 containerd[1681]: time="2024-08-05T22:22:06.241119100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.241224 containerd[1681]: time="2024-08-05T22:22:06.241210500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.241326 containerd[1681]: time="2024-08-05T22:22:06.241273600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.241403 containerd[1681]: time="2024-08-05T22:22:06.241389500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.241493 containerd[1681]: time="2024-08-05T22:22:06.241478800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241553600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241581500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241599600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241619000Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241761000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241781700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241815800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241834000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241852400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241873100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241892300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Aug 5 22:22:06.242773 containerd[1681]: time="2024-08-05T22:22:06.241909100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Aug 5 22:22:06.243207 containerd[1681]: time="2024-08-05T22:22:06.242270800Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Aug 5 22:22:06.243207 containerd[1681]: time="2024-08-05T22:22:06.242346700Z" level=info msg="Connect containerd service" Aug 5 22:22:06.243207 containerd[1681]: time="2024-08-05T22:22:06.242387500Z" level=info msg="using legacy CRI server" Aug 5 22:22:06.243207 containerd[1681]: time="2024-08-05T22:22:06.242396500Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 5 22:22:06.243207 containerd[1681]: time="2024-08-05T22:22:06.242521400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.245923000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:22:06.246524 
containerd[1681]: time="2024-08-05T22:22:06.245981000Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246005900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246021400Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246038400Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246065500Z" level=info msg="Start subscribing containerd event" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246111900Z" level=info msg="Start recovering state" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246187300Z" level=info msg="Start event monitor" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246204900Z" level=info msg="Start snapshots syncer" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246217200Z" level=info msg="Start cni network conf syncer for default" Aug 5 22:22:06.246524 containerd[1681]: time="2024-08-05T22:22:06.246228000Z" level=info msg="Start streaming server" Aug 5 22:22:06.247256 containerd[1681]: time="2024-08-05T22:22:06.247050400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 5 22:22:06.247256 containerd[1681]: time="2024-08-05T22:22:06.247115800Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 5 22:22:06.247256 containerd[1681]: time="2024-08-05T22:22:06.247182400Z" level=info msg="containerd successfully booted in 0.119636s" Aug 5 22:22:06.247471 systemd[1]: Started containerd.service - containerd container runtime. Aug 5 22:22:06.325694 sshd_keygen[1689]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 5 22:22:06.384495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 5 22:22:06.396444 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 5 22:22:06.405657 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Aug 5 22:22:06.419856 systemd[1]: issuegen.service: Deactivated successfully. Aug 5 22:22:06.420056 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 5 22:22:06.432724 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 5 22:22:06.433556 tar[1667]: linux-amd64/LICENSE Aug 5 22:22:06.433556 tar[1667]: linux-amd64/README.md Aug 5 22:22:06.457557 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 5 22:22:06.465803 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 5 22:22:06.469747 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 5 22:22:06.472535 systemd[1]: Reached target getty.target - Login Prompts. Aug 5 22:22:06.475346 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 5 22:22:06.484660 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Aug 5 22:22:06.901904 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:22:06.905638 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 5 22:22:06.907499 (kubelet)[1794]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:22:06.908571 systemd[1]: Startup finished in 604ms (firmware) + 7.409s (loader) + 977ms (kernel) + 8.912s (initrd) + 6.346s (userspace) = 24.250s. Aug 5 22:22:07.170606 login[1783]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 22:22:07.175250 login[1785]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Aug 5 22:22:07.186861 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 5 22:22:07.194343 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 5 22:22:07.200669 systemd-logind[1650]: New session 2 of user core. Aug 5 22:22:07.210392 systemd-logind[1650]: New session 1 of user core. Aug 5 22:22:07.216271 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 5 22:22:07.226551 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 5 22:22:07.235006 (systemd)[1806]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:22:07.436055 systemd[1806]: Queued start job for default target default.target. Aug 5 22:22:07.442519 systemd[1806]: Created slice app.slice - User Application Slice. Aug 5 22:22:07.442682 systemd[1806]: Reached target paths.target - Paths. Aug 5 22:22:07.442769 systemd[1806]: Reached target timers.target - Timers. Aug 5 22:22:07.446633 systemd[1806]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 5 22:22:07.455311 waagent[1786]: 2024-08-05T22:22:07.455197Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Aug 5 22:22:07.459873 waagent[1786]: 2024-08-05T22:22:07.459257Z INFO Daemon Daemon OS: flatcar 4012.1.0 Aug 5 22:22:07.462550 systemd[1806]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 5 22:22:07.462758 systemd[1806]: Reached target sockets.target - Sockets. Aug 5 22:22:07.462856 systemd[1806]: Reached target basic.target - Basic System. Aug 5 22:22:07.462900 systemd[1806]: Reached target default.target - Main User Target. Aug 5 22:22:07.462931 systemd[1806]: Startup finished in 219ms. Aug 5 22:22:07.463136 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 5 22:22:07.473598 waagent[1786]: 2024-08-05T22:22:07.463412Z INFO Daemon Daemon Python: 3.11.9 Aug 5 22:22:07.473598 waagent[1786]: 2024-08-05T22:22:07.465655Z INFO Daemon Daemon Run daemon Aug 5 22:22:07.473598 waagent[1786]: 2024-08-05T22:22:07.467599Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4012.1.0' Aug 5 22:22:07.473598 waagent[1786]: 2024-08-05T22:22:07.471182Z INFO Daemon Daemon Using waagent for provisioning Aug 5 22:22:07.471131 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 5 22:22:07.473811 waagent[1786]: 2024-08-05T22:22:07.473676Z INFO Daemon Daemon Activate resource disk Aug 5 22:22:07.473319 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 5 22:22:07.476654 waagent[1786]: 2024-08-05T22:22:07.475721Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Aug 5 22:22:07.484331 waagent[1786]: 2024-08-05T22:22:07.484284Z INFO Daemon Daemon Found device: None Aug 5 22:22:07.484730 waagent[1786]: 2024-08-05T22:22:07.484688Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Aug 5 22:22:07.485433 waagent[1786]: 2024-08-05T22:22:07.485397Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Aug 5 22:22:07.488472 waagent[1786]: 2024-08-05T22:22:07.487727Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 22:22:07.488472 waagent[1786]: 2024-08-05T22:22:07.488404Z INFO Daemon Daemon Running default provisioning handler Aug 5 22:22:07.507139 waagent[1786]: 2024-08-05T22:22:07.506613Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. Aug 5 22:22:07.516471 waagent[1786]: 2024-08-05T22:22:07.513988Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Aug 5 22:22:07.518777 waagent[1786]: 2024-08-05T22:22:07.518406Z INFO Daemon Daemon cloud-init is enabled: False Aug 5 22:22:07.520848 waagent[1786]: 2024-08-05T22:22:07.520777Z INFO Daemon Daemon Copying ovf-env.xml Aug 5 22:22:07.576170 waagent[1786]: 2024-08-05T22:22:07.575574Z INFO Daemon Daemon Successfully mounted dvd Aug 5 22:22:07.601241 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Aug 5 22:22:07.604322 waagent[1786]: 2024-08-05T22:22:07.604216Z INFO Daemon Daemon Detect protocol endpoint Aug 5 22:22:07.608533 waagent[1786]: 2024-08-05T22:22:07.606751Z INFO Daemon Daemon Clean protocol and wireserver endpoint Aug 5 22:22:07.608533 waagent[1786]: 2024-08-05T22:22:07.607019Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Aug 5 22:22:07.608533 waagent[1786]: 2024-08-05T22:22:07.607813Z INFO Daemon Daemon Test for route to 168.63.129.16 Aug 5 22:22:07.609079 waagent[1786]: 2024-08-05T22:22:07.609037Z INFO Daemon Daemon Route to 168.63.129.16 exists Aug 5 22:22:07.609656 waagent[1786]: 2024-08-05T22:22:07.609617Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Aug 5 22:22:07.626671 waagent[1786]: 2024-08-05T22:22:07.626613Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Aug 5 22:22:07.627271 waagent[1786]: 2024-08-05T22:22:07.627239Z INFO Daemon Daemon Wire protocol version:2012-11-30 Aug 5 22:22:07.628353 waagent[1786]: 2024-08-05T22:22:07.628316Z INFO Daemon Daemon Server preferred version:2015-04-05 Aug 5 22:22:07.696584 waagent[1786]: 2024-08-05T22:22:07.696416Z INFO Daemon Daemon Initializing goal state during protocol detection Aug 5 22:22:07.697041 waagent[1786]: 2024-08-05T22:22:07.696978Z INFO Daemon Daemon Forcing an update of the goal state. 
Aug 5 22:22:07.701562 waagent[1786]: 2024-08-05T22:22:07.701512Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 22:22:07.713578 waagent[1786]: 2024-08-05T22:22:07.713528Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.154 Aug 5 22:22:07.714260 waagent[1786]: 2024-08-05T22:22:07.714217Z INFO Daemon Aug 5 22:22:07.714966 waagent[1786]: 2024-08-05T22:22:07.714930Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 76d2c8dc-af44-430a-8cff-fbe474bb6607 eTag: 6260590792636635209 source: Fabric] Aug 5 22:22:07.715922 waagent[1786]: 2024-08-05T22:22:07.715883Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Aug 5 22:22:07.716953 waagent[1786]: 2024-08-05T22:22:07.716911Z INFO Daemon Aug 5 22:22:07.717681 waagent[1786]: 2024-08-05T22:22:07.717644Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Aug 5 22:22:07.729396 waagent[1786]: 2024-08-05T22:22:07.729367Z INFO Daemon Daemon Downloading artifacts profile blob Aug 5 22:22:07.825280 waagent[1786]: 2024-08-05T22:22:07.824338Z INFO Daemon Downloaded certificate {'thumbprint': 'EB76ED806612B0B6F212F8AF21A6490904889673', 'hasPrivateKey': True} Aug 5 22:22:07.829550 waagent[1786]: 2024-08-05T22:22:07.829389Z INFO Daemon Downloaded certificate {'thumbprint': '33CCAB32062B5D835E41D4BC5297358250B667D4', 'hasPrivateKey': False} Aug 5 22:22:07.833865 waagent[1786]: 2024-08-05T22:22:07.833806Z INFO Daemon Fetch goal state completed Aug 5 22:22:07.842932 waagent[1786]: 2024-08-05T22:22:07.842882Z INFO Daemon Daemon Starting provisioning Aug 5 22:22:07.845815 waagent[1786]: 2024-08-05T22:22:07.845221Z INFO Daemon Daemon Handle ovf-env.xml. Aug 5 22:22:07.847494 waagent[1786]: 2024-08-05T22:22:07.847173Z INFO Daemon Daemon Set hostname [ci-4012.1.0-a-bfd2eb4520] Aug 5 22:22:07.853623 waagent[1786]: 2024-08-05T22:22:07.852121Z INFO Daemon Daemon Publish hostname [ci-4012.1.0-a-bfd2eb4520] Aug 5 22:22:07.853623 waagent[1786]: 2024-08-05T22:22:07.852428Z INFO Daemon Daemon Examine /proc/net/route for primary interface Aug 5 22:22:07.853623 waagent[1786]: 2024-08-05T22:22:07.853367Z INFO Daemon Daemon Primary interface is [eth0] Aug 5 22:22:07.868187 systemd-networkd[1547]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 5 22:22:07.868202 systemd-networkd[1547]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 5 22:22:07.868249 systemd-networkd[1547]: eth0: DHCP lease lost Aug 5 22:22:07.869746 waagent[1786]: 2024-08-05T22:22:07.869672Z INFO Daemon Daemon Create user account if not exists Aug 5 22:22:07.883560 waagent[1786]: 2024-08-05T22:22:07.870188Z INFO Daemon Daemon User core already exists, skip useradd Aug 5 22:22:07.883560 waagent[1786]: 2024-08-05T22:22:07.871019Z INFO Daemon Daemon Configure sudoer Aug 5 22:22:07.883560 waagent[1786]: 2024-08-05T22:22:07.872018Z INFO Daemon Daemon Configure sshd Aug 5 22:22:07.883560 waagent[1786]: 2024-08-05T22:22:07.872997Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Aug 5 22:22:07.883560 waagent[1786]: 2024-08-05T22:22:07.873192Z INFO Daemon Daemon Deploy ssh public key. 
Aug 5 22:22:07.886089 systemd-networkd[1547]: eth0: DHCPv6 lease lost Aug 5 22:22:07.899479 kubelet[1794]: E0805 22:22:07.898164 1794 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:22:07.900677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:22:07.900850 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:22:07.901369 systemd[1]: kubelet.service: Consumed 1.012s CPU time. Aug 5 22:22:07.908547 systemd-networkd[1547]: eth0: DHCPv4 address 10.200.4.17/24, gateway 10.200.4.1 acquired from 168.63.129.16 Aug 5 22:22:18.151127 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 5 22:22:18.156991 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:22:18.251395 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:22:18.256168 (kubelet)[1864]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:22:18.811394 kubelet[1864]: E0805 22:22:18.811291 1864 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:22:18.815520 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:22:18.815738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:22:28.940321 chronyd[1656]: Selected source PHC0 Aug 5 22:22:29.066432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 5 22:22:29.072682 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:22:29.382801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:22:29.387269 (kubelet)[1880]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:22:29.714663 kubelet[1880]: E0805 22:22:29.714540 1880 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:22:29.717317 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:22:29.717541 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:22:38.159732 waagent[1786]: 2024-08-05T22:22:38.159662Z INFO Daemon Daemon Provisioning complete Aug 5 22:22:38.174439 waagent[1786]: 2024-08-05T22:22:38.174373Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Aug 5 22:22:38.180105 waagent[1786]: 2024-08-05T22:22:38.175732Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Aug 5 22:22:38.180105 waagent[1786]: 2024-08-05T22:22:38.177563Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Aug 5 22:22:38.302759 waagent[1888]: 2024-08-05T22:22:38.302666Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Aug 5 22:22:38.303221 waagent[1888]: 2024-08-05T22:22:38.302829Z INFO ExtHandler ExtHandler OS: flatcar 4012.1.0 Aug 5 22:22:38.303221 waagent[1888]: 2024-08-05T22:22:38.302913Z INFO ExtHandler ExtHandler Python: 3.11.9 Aug 5 22:22:38.320901 waagent[1888]: 2024-08-05T22:22:38.320827Z INFO ExtHandler ExtHandler Distro: flatcar-4012.1.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.9; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Aug 5 22:22:38.321095 waagent[1888]: 2024-08-05T22:22:38.321051Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:22:38.321187 waagent[1888]: 2024-08-05T22:22:38.321146Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:22:38.328282 waagent[1888]: 2024-08-05T22:22:38.328216Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Aug 5 22:22:38.341748 waagent[1888]: 2024-08-05T22:22:38.341694Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.154 Aug 5 22:22:38.342239 waagent[1888]: 2024-08-05T22:22:38.342183Z INFO ExtHandler Aug 5 22:22:38.342331 waagent[1888]: 2024-08-05T22:22:38.342279Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6d785ad3-c508-46df-85af-1e550bca39ca eTag: 6260590792636635209 source: Fabric] Aug 5 22:22:38.342668 waagent[1888]: 2024-08-05T22:22:38.342615Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Aug 5 22:22:38.343255 waagent[1888]: 2024-08-05T22:22:38.343198Z INFO ExtHandler Aug 5 22:22:38.343335 waagent[1888]: 2024-08-05T22:22:38.343284Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Aug 5 22:22:38.346555 waagent[1888]: 2024-08-05T22:22:38.346512Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Aug 5 22:22:38.413640 waagent[1888]: 2024-08-05T22:22:38.413510Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EB76ED806612B0B6F212F8AF21A6490904889673', 'hasPrivateKey': True} Aug 5 22:22:38.414014 waagent[1888]: 2024-08-05T22:22:38.413956Z INFO ExtHandler Downloaded certificate {'thumbprint': '33CCAB32062B5D835E41D4BC5297358250B667D4', 'hasPrivateKey': False} Aug 5 22:22:38.414484 waagent[1888]: 2024-08-05T22:22:38.414421Z INFO ExtHandler Fetch goal state completed Aug 5 22:22:38.429552 waagent[1888]: 2024-08-05T22:22:38.429492Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1888 Aug 5 22:22:38.429706 waagent[1888]: 2024-08-05T22:22:38.429660Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Aug 5 22:22:38.431242 waagent[1888]: 2024-08-05T22:22:38.431183Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '4012.1.0', '', 'Flatcar Container Linux by Kinvolk'] Aug 5 22:22:38.431637 waagent[1888]: 2024-08-05T22:22:38.431587Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Aug 5 22:22:38.439153 waagent[1888]: 2024-08-05T22:22:38.439116Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Aug 5 22:22:38.439343 waagent[1888]: 2024-08-05T22:22:38.439300Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Aug 5 22:22:38.445994 waagent[1888]: 2024-08-05T22:22:38.445954Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Aug 5 22:22:38.453142 systemd[1]: Reloading requested from client PID 1903 ('systemctl') (unit waagent.service)... Aug 5 22:22:38.453158 systemd[1]: Reloading... Aug 5 22:22:38.542562 zram_generator::config[1937]: No configuration found. Aug 5 22:22:38.664668 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:22:38.744181 systemd[1]: Reloading finished in 290 ms. Aug 5 22:22:38.768868 waagent[1888]: 2024-08-05T22:22:38.768325Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Aug 5 22:22:38.776973 systemd[1]: Reloading requested from client PID 1991 ('systemctl') (unit waagent.service)... Aug 5 22:22:38.777139 systemd[1]: Reloading... Aug 5 22:22:38.851529 zram_generator::config[2019]: No configuration found. Aug 5 22:22:38.984005 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:22:39.068711 systemd[1]: Reloading finished in 291 ms. Aug 5 22:22:39.094834 waagent[1888]: 2024-08-05T22:22:39.093693Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Aug 5 22:22:39.094834 waagent[1888]: 2024-08-05T22:22:39.093927Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Aug 5 22:22:39.212490 waagent[1888]: 2024-08-05T22:22:39.212392Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Aug 5 22:22:39.213077 waagent[1888]: 2024-08-05T22:22:39.213013Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Aug 5 22:22:39.213910 waagent[1888]: 2024-08-05T22:22:39.213838Z INFO ExtHandler ExtHandler Starting env monitor service. Aug 5 22:22:39.214010 waagent[1888]: 2024-08-05T22:22:39.213973Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:22:39.214140 waagent[1888]: 2024-08-05T22:22:39.214080Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:22:39.214780 waagent[1888]: 2024-08-05T22:22:39.214724Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Aug 5 22:22:39.215015 waagent[1888]: 2024-08-05T22:22:39.214960Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Aug 5 22:22:39.215367 waagent[1888]: 2024-08-05T22:22:39.215314Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Aug 5 22:22:39.215601 waagent[1888]: 2024-08-05T22:22:39.215551Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Aug 5 22:22:39.215711 waagent[1888]: 2024-08-05T22:22:39.215670Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Aug 5 22:22:39.215888 waagent[1888]: 2024-08-05T22:22:39.215843Z INFO EnvHandler ExtHandler Configure routes Aug 5 22:22:39.215972 waagent[1888]: 2024-08-05T22:22:39.215933Z INFO EnvHandler ExtHandler Gateway:None Aug 5 22:22:39.216048 waagent[1888]: 2024-08-05T22:22:39.216012Z INFO EnvHandler ExtHandler Routes:None Aug 5 22:22:39.216723 waagent[1888]: 2024-08-05T22:22:39.216674Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Aug 5 22:22:39.216723 waagent[1888]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Aug 5 22:22:39.216723 waagent[1888]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 Aug 5 22:22:39.216723 waagent[1888]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Aug 5 22:22:39.216723 waagent[1888]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:22:39.216723 waagent[1888]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:22:39.216723 waagent[1888]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Aug 5 22:22:39.217246 waagent[1888]: 2024-08-05T22:22:39.217021Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Aug 5 22:22:39.218474 waagent[1888]: 2024-08-05T22:22:39.217516Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Aug 5 22:22:39.218474 waagent[1888]: 2024-08-05T22:22:39.217644Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Aug 5 22:22:39.219059 waagent[1888]: 2024-08-05T22:22:39.219009Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Aug 5 22:22:39.229779 waagent[1888]: 2024-08-05T22:22:39.229731Z INFO ExtHandler ExtHandler Aug 5 22:22:39.229879 waagent[1888]: 2024-08-05T22:22:39.229840Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: db74ad9e-3d79-46b9-8b28-76658f587d22 correlation 9ea99bd4-6ca4-433c-8615-b3a21815a209 created: 2024-08-05T22:21:21.077019Z] Aug 5 22:22:39.230351 waagent[1888]: 2024-08-05T22:22:39.230301Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Aug 5 22:22:39.233829 waagent[1888]: 2024-08-05T22:22:39.231110Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 1 ms] Aug 5 22:22:39.239972 waagent[1888]: 2024-08-05T22:22:39.239885Z INFO MonitorHandler ExtHandler Network interfaces: Aug 5 22:22:39.239972 waagent[1888]: Executing ['ip', '-a', '-o', 'link']: Aug 5 22:22:39.239972 waagent[1888]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Aug 5 22:22:39.239972 waagent[1888]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:09:eb:e5 brd ff:ff:ff:ff:ff:ff Aug 5 22:22:39.239972 waagent[1888]: 3: enP40992s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:09:eb:e5 brd ff:ff:ff:ff:ff:ff\ altname enP40992p0s2 Aug 5 22:22:39.239972 waagent[1888]: Executing ['ip', '-4', '-a', '-o', 'address']: Aug 5 22:22:39.239972 waagent[1888]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Aug 5 22:22:39.239972 waagent[1888]: 2: eth0 inet 10.200.4.17/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever Aug 5 22:22:39.239972 waagent[1888]: Executing ['ip', '-6', '-a', '-o', 'address']: Aug 5 22:22:39.239972 waagent[1888]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever Aug 5 22:22:39.239972 waagent[1888]: 2: eth0 inet6 fe80::7e1e:52ff:fe09:ebe5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 22:22:39.239972 waagent[1888]: 3: enP40992s1 inet6 fe80::7e1e:52ff:fe09:ebe5/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Aug 5 22:22:39.273512 waagent[1888]: 2024-08-05T22:22:39.273427Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 0661F847-C723-4369-9CB4-6C1188CFF8F6;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Aug 5 22:22:39.280492 waagent[1888]: 2024-08-05T22:22:39.280391Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Aug 5 22:22:39.280492 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.280492 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.280492 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.280492 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.280492 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.280492 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.280492 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 22:22:39.280492 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 22:22:39.280492 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 22:22:39.284069 waagent[1888]: 2024-08-05T22:22:39.284013Z INFO EnvHandler ExtHandler Current Firewall rules: Aug 5 22:22:39.284069 waagent[1888]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.284069 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.284069 waagent[1888]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.284069 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.284069 waagent[1888]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Aug 5 22:22:39.284069 waagent[1888]: pkts bytes target prot opt in out source destination Aug 5 22:22:39.284069 waagent[1888]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Aug 5 22:22:39.284069 waagent[1888]: 4 594 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Aug 5 22:22:39.284069 waagent[1888]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Aug 5 22:22:39.284441 waagent[1888]: 2024-08-05T22:22:39.284299Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Aug 5 22:22:39.968396 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 5 22:22:39.973712 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:22:40.216329 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:22:40.221059 (kubelet)[2121]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:22:40.590906 kubelet[2121]: E0805 22:22:40.590781 2121 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:22:40.593835 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:22:40.594052 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:22:50.716328 update_engine[1653]: I0805 22:22:50.716239 1653 update_attempter.cc:509] Updating boot flags... Aug 5 22:22:50.724920 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Aug 5 22:22:50.731795 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:22:50.792481 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2143) Aug 5 22:22:51.030428 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
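The two firewall snapshots above show the three OUTPUT-chain rules the agent maintains toward the wire server 168.63.129.16: accept TCP to port 53, accept traffic owned by UID 0, and drop anything else in state INVALID or NEW; the EnvHandler then sets the sda block-device timeout to 300 s. A hedged sketch of how those rules and the timeout could be verified from Python, assuming the default filter table and root privileges (the exact commands the agent itself runs may differ):

    import subprocess

    # One iptables -C (check) invocation per rule visible in the snapshot above.
    checks = [
        ["iptables", "-C", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "--dport", "53", "-j", "ACCEPT"],
        ["iptables", "-C", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["iptables", "-C", "OUTPUT", "-d", "168.63.129.16", "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for cmd in checks:
        present = subprocess.run(cmd, capture_output=True).returncode == 0
        print("present" if present else "missing", " ".join(cmd[2:]))

    # "Set block dev timeout: sda with timeout: 300" corresponds to the SCSI timeout in sysfs.
    with open("/sys/block/sda/device/timeout") as f:
        print("sda timeout:", f.read().strip(), "seconds")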
Aug 5 22:22:51.037749 (kubelet)[2169]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:22:51.429380 kubelet[2169]: E0805 22:22:51.427801 2169 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:22:51.429682 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 36 scanned by (udev-worker) (2144) Aug 5 22:22:51.432984 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:22:51.434118 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:22:52.572495 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Aug 5 22:23:01.508253 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Aug 5 22:23:01.514687 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:01.633093 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:23:01.643761 (kubelet)[2218]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:01.685853 kubelet[2218]: E0805 22:23:01.685808 2218 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:01.688540 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:01.688753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:23:11.758366 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Aug 5 22:23:11.764689 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:11.852997 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:23:11.865768 (kubelet)[2234]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:12.415400 kubelet[2234]: E0805 22:23:12.415337 2234 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:12.417967 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:12.418165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:23:22.508551 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Aug 5 22:23:22.513670 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:22.645239 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:23:22.654761 (kubelet)[2250]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:22.697624 kubelet[2250]: E0805 22:23:22.697567 2250 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:22.700168 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:22.700399 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:23:28.446811 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 5 22:23:28.451726 systemd[1]: Started sshd@0-10.200.4.17:22-10.200.16.10:50930.service - OpenSSH per-connection server daemon (10.200.16.10:50930). Aug 5 22:23:29.054686 sshd[2259]: Accepted publickey for core from 10.200.16.10 port 50930 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:29.056414 sshd[2259]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:29.061915 systemd-logind[1650]: New session 3 of user core. Aug 5 22:23:29.072630 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 5 22:23:29.576696 systemd[1]: Started sshd@1-10.200.4.17:22-10.200.16.10:37374.service - OpenSSH per-connection server daemon (10.200.16.10:37374). Aug 5 22:23:30.165041 sshd[2264]: Accepted publickey for core from 10.200.16.10 port 37374 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:30.166533 sshd[2264]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:30.170366 systemd-logind[1650]: New session 4 of user core. Aug 5 22:23:30.179695 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 5 22:23:30.586952 sshd[2264]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:30.590245 systemd[1]: sshd@1-10.200.4.17:22-10.200.16.10:37374.service: Deactivated successfully. Aug 5 22:23:30.592653 systemd[1]: session-4.scope: Deactivated successfully. Aug 5 22:23:30.594358 systemd-logind[1650]: Session 4 logged out. Waiting for processes to exit. Aug 5 22:23:30.595533 systemd-logind[1650]: Removed session 4. Aug 5 22:23:30.692523 systemd[1]: Started sshd@2-10.200.4.17:22-10.200.16.10:37388.service - OpenSSH per-connection server daemon (10.200.16.10:37388). Aug 5 22:23:31.287025 sshd[2271]: Accepted publickey for core from 10.200.16.10 port 37388 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:31.288777 sshd[2271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:31.293384 systemd-logind[1650]: New session 5 of user core. Aug 5 22:23:31.302614 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 5 22:23:31.707335 sshd[2271]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:31.711742 systemd[1]: sshd@2-10.200.4.17:22-10.200.16.10:37388.service: Deactivated successfully. Aug 5 22:23:31.713548 systemd[1]: session-5.scope: Deactivated successfully. Aug 5 22:23:31.714308 systemd-logind[1650]: Session 5 logged out. Waiting for processes to exit. Aug 5 22:23:31.715191 systemd-logind[1650]: Removed session 5. Aug 5 22:23:31.814578 systemd[1]: Started sshd@3-10.200.4.17:22-10.200.16.10:37392.service - OpenSSH per-connection server daemon (10.200.16.10:37392). 
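Each "Accepted publickey" line above identifies the client key by its OpenSSH fingerprint: the literal prefix SHA256: followed by the unpadded base64 of the SHA-256 digest of the raw key blob. A small sketch that computes the same string from an authorized_keys line, so entries like "RSA SHA256:adX111…" can be matched to a specific key:

    import base64
    import hashlib

    def openssh_sha256_fingerprint(authorized_keys_line: str) -> str:
        # An authorized_keys line looks like: "ssh-rsa AAAAB3NzaC1yc2E... comment"
        key_blob = base64.b64decode(authorized_keys_line.split()[1])
        digest = hashlib.sha256(key_blob).digest()
        return "SHA256:" + base64.b64encode(digest).decode("ascii").rstrip("=")

    # Example usage (hypothetical key file path):
    # print(openssh_sha256_fingerprint(open("/home/core/.ssh/authorized_keys").readline()))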
Aug 5 22:23:32.415152 sshd[2278]: Accepted publickey for core from 10.200.16.10 port 37392 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:32.423236 sshd[2278]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:32.427648 systemd-logind[1650]: New session 6 of user core. Aug 5 22:23:32.436856 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 5 22:23:32.746694 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. Aug 5 22:23:32.753658 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:32.843900 sshd[2278]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:32.848154 systemd[1]: sshd@3-10.200.4.17:22-10.200.16.10:37392.service: Deactivated successfully. Aug 5 22:23:32.848280 systemd-logind[1650]: Session 6 logged out. Waiting for processes to exit. Aug 5 22:23:32.850893 systemd[1]: session-6.scope: Deactivated successfully. Aug 5 22:23:32.856234 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:23:32.857179 systemd-logind[1650]: Removed session 6. Aug 5 22:23:32.862001 (kubelet)[2291]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:32.903597 kubelet[2291]: E0805 22:23:32.903546 2291 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:32.906082 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:32.906289 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:23:32.950354 systemd[1]: Started sshd@4-10.200.4.17:22-10.200.16.10:37404.service - OpenSSH per-connection server daemon (10.200.16.10:37404). Aug 5 22:23:33.547025 sshd[2301]: Accepted publickey for core from 10.200.16.10 port 37404 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:33.548696 sshd[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:33.553293 systemd-logind[1650]: New session 7 of user core. Aug 5 22:23:33.562651 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 5 22:23:34.462260 sudo[2304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 5 22:23:34.462632 sudo[2304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:23:34.479766 sudo[2304]: pam_unix(sudo:session): session closed for user root Aug 5 22:23:34.576194 sshd[2301]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:34.580020 systemd[1]: sshd@4-10.200.4.17:22-10.200.16.10:37404.service: Deactivated successfully. Aug 5 22:23:34.582519 systemd[1]: session-7.scope: Deactivated successfully. Aug 5 22:23:34.584476 systemd-logind[1650]: Session 7 logged out. Waiting for processes to exit. Aug 5 22:23:34.585414 systemd-logind[1650]: Removed session 7. Aug 5 22:23:34.681478 systemd[1]: Started sshd@5-10.200.4.17:22-10.200.16.10:37414.service - OpenSSH per-connection server daemon (10.200.16.10:37414). 
Aug 5 22:23:35.276341 sshd[2309]: Accepted publickey for core from 10.200.16.10 port 37414 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:35.277852 sshd[2309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:35.282293 systemd-logind[1650]: New session 8 of user core. Aug 5 22:23:35.288594 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 5 22:23:35.605473 sudo[2313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 5 22:23:35.605821 sudo[2313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:23:35.614591 sudo[2313]: pam_unix(sudo:session): session closed for user root Aug 5 22:23:35.619482 sudo[2312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Aug 5 22:23:35.619845 sudo[2312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:23:35.634768 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Aug 5 22:23:35.636345 auditctl[2316]: No rules Aug 5 22:23:35.636713 systemd[1]: audit-rules.service: Deactivated successfully. Aug 5 22:23:35.636900 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Aug 5 22:23:35.639417 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Aug 5 22:23:35.664810 augenrules[2334]: No rules Aug 5 22:23:35.666116 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Aug 5 22:23:35.667339 sudo[2312]: pam_unix(sudo:session): session closed for user root Aug 5 22:23:35.763106 sshd[2309]: pam_unix(sshd:session): session closed for user core Aug 5 22:23:35.766275 systemd[1]: sshd@5-10.200.4.17:22-10.200.16.10:37414.service: Deactivated successfully. Aug 5 22:23:35.768202 systemd[1]: session-8.scope: Deactivated successfully. Aug 5 22:23:35.769718 systemd-logind[1650]: Session 8 logged out. Waiting for processes to exit. Aug 5 22:23:35.770829 systemd-logind[1650]: Removed session 8. Aug 5 22:23:35.869690 systemd[1]: Started sshd@6-10.200.4.17:22-10.200.16.10:37422.service - OpenSSH per-connection server daemon (10.200.16.10:37422). Aug 5 22:23:36.467470 sshd[2342]: Accepted publickey for core from 10.200.16.10 port 37422 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:23:36.468910 sshd[2342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:23:36.473622 systemd-logind[1650]: New session 9 of user core. Aug 5 22:23:36.481622 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 5 22:23:36.799529 sudo[2345]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 5 22:23:36.799863 sudo[2345]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Aug 5 22:23:37.062763 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 5 22:23:37.065264 (dockerd)[2355]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 5 22:23:37.579901 dockerd[2355]: time="2024-08-05T22:23:37.579841900Z" level=info msg="Starting up" Aug 5 22:23:37.811008 dockerd[2355]: time="2024-08-05T22:23:37.810954120Z" level=info msg="Loading containers: start." 
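dockerd is starting here on behalf of install.sh; once it logs "API listen on /run/docker.sock" a little further down, the Engine API is reachable over that Unix socket (the same path the earlier docker.socket warning normalized /var/run/docker.sock to). A minimal liveness probe, assuming the default socket path and sufficient privileges:

    import socket

    # GET /_ping against the Engine API over the Unix socket; a healthy daemon answers "OK".
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/run/docker.sock")
    sock.sendall(b"GET /_ping HTTP/1.1\r\nHost: docker\r\nConnection: close\r\n\r\n")
    reply = bytearray()
    while chunk := sock.recv(4096):
        reply.extend(chunk)
    sock.close()
    print(reply.decode(errors="replace").splitlines()[0])  # e.g. "HTTP/1.1 200 OK"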
Aug 5 22:23:37.931515 kernel: Initializing XFRM netlink socket Aug 5 22:23:38.003753 systemd-networkd[1547]: docker0: Link UP Aug 5 22:23:38.029024 dockerd[2355]: time="2024-08-05T22:23:38.028981114Z" level=info msg="Loading containers: done." Aug 5 22:23:38.140485 dockerd[2355]: time="2024-08-05T22:23:38.140430284Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 5 22:23:38.140704 dockerd[2355]: time="2024-08-05T22:23:38.140676186Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Aug 5 22:23:38.140815 dockerd[2355]: time="2024-08-05T22:23:38.140794087Z" level=info msg="Daemon has completed initialization" Aug 5 22:23:38.187025 dockerd[2355]: time="2024-08-05T22:23:38.186964031Z" level=info msg="API listen on /run/docker.sock" Aug 5 22:23:38.188512 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 5 22:23:39.509955 containerd[1681]: time="2024-08-05T22:23:39.509840335Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\"" Aug 5 22:23:40.287288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1883458880.mount: Deactivated successfully. Aug 5 22:23:42.556466 containerd[1681]: time="2024-08-05T22:23:42.556401580Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:42.558277 containerd[1681]: time="2024-08-05T22:23:42.558216902Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.28.12: active requests=0, bytes read=34527325" Aug 5 22:23:42.562580 containerd[1681]: time="2024-08-05T22:23:42.562525552Z" level=info msg="ImageCreate event name:\"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:42.567491 containerd[1681]: time="2024-08-05T22:23:42.567411810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:42.569161 containerd[1681]: time="2024-08-05T22:23:42.568446822Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.28.12\" with image id \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\", repo tag \"registry.k8s.io/kube-apiserver:v1.28.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:ac3b6876d95fe7b7691e69f2161a5466adbe9d72d44f342d595674321ce16d23\", size \"34524117\" in 3.058564786s" Aug 5 22:23:42.569161 containerd[1681]: time="2024-08-05T22:23:42.568508523Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.12\" returns image reference \"sha256:e273eb47a05653f4156904acde3c077c9d6aa606e8f8326423a0cd229dec41ba\"" Aug 5 22:23:42.589641 containerd[1681]: time="2024-08-05T22:23:42.589598672Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\"" Aug 5 22:23:43.008250 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Aug 5 22:23:43.014892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:43.137851 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 5 22:23:43.149768 (kubelet)[2550]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:43.190967 kubelet[2550]: E0805 22:23:43.190872 2550 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:43.193425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:43.193651 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 5 22:23:46.437541 containerd[1681]: time="2024-08-05T22:23:46.437486432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:46.440163 containerd[1681]: time="2024-08-05T22:23:46.440103563Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.28.12: active requests=0, bytes read=31847075" Aug 5 22:23:46.443227 containerd[1681]: time="2024-08-05T22:23:46.443172299Z" level=info msg="ImageCreate event name:\"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:46.449214 containerd[1681]: time="2024-08-05T22:23:46.449159770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:46.450312 containerd[1681]: time="2024-08-05T22:23:46.450163782Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.28.12\" with image id \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\", repo tag \"registry.k8s.io/kube-controller-manager:v1.28.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:996c6259e4405ab79083fbb52bcf53003691a50b579862bf29b3abaa468460db\", size \"33397013\" in 3.86052121s" Aug 5 22:23:46.450312 containerd[1681]: time="2024-08-05T22:23:46.450202682Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.12\" returns image reference \"sha256:e7dd86d2e68b50ae5c49b982edd7e69404b46696a21dd4c9de65b213e9468512\"" Aug 5 22:23:46.471911 containerd[1681]: time="2024-08-05T22:23:46.471873738Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\"" Aug 5 22:23:48.036177 containerd[1681]: time="2024-08-05T22:23:48.036121478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:48.037993 containerd[1681]: time="2024-08-05T22:23:48.037934499Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.28.12: active requests=0, bytes read=17097303" Aug 5 22:23:48.041444 containerd[1681]: time="2024-08-05T22:23:48.041389940Z" level=info msg="ImageCreate event name:\"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:48.046218 containerd[1681]: time="2024-08-05T22:23:48.046159196Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Aug 5 22:23:48.047304 containerd[1681]: time="2024-08-05T22:23:48.047166908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.28.12\" with image id \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\", repo tag \"registry.k8s.io/kube-scheduler:v1.28.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:d93a3b5961248820beb5ec6dfb0320d12c0dba82fc48693d20d345754883551c\", size \"18647259\" in 1.57525667s" Aug 5 22:23:48.047304 containerd[1681]: time="2024-08-05T22:23:48.047207008Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.12\" returns image reference \"sha256:ee5fb2190e0207cd765596f1cd7c9a492c9cfded10710d45ef19f23e70d3b4a9\"" Aug 5 22:23:48.068547 containerd[1681]: time="2024-08-05T22:23:48.068505259Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\"" Aug 5 22:23:49.584957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2853804846.mount: Deactivated successfully. Aug 5 22:23:50.133710 containerd[1681]: time="2024-08-05T22:23:50.133654022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.28.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.135522 containerd[1681]: time="2024-08-05T22:23:50.135461742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.28.12: active requests=0, bytes read=28303777" Aug 5 22:23:50.139253 containerd[1681]: time="2024-08-05T22:23:50.139189384Z" level=info msg="ImageCreate event name:\"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.145920 containerd[1681]: time="2024-08-05T22:23:50.145878858Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.146969 containerd[1681]: time="2024-08-05T22:23:50.146662667Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.28.12\" with image id \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\", repo tag \"registry.k8s.io/kube-proxy:v1.28.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:7dd7829fa889ac805a0b1047eba04599fa5006bdbcb5cb9c8d14e1dc8910488b\", size \"28302788\" in 2.078110507s" Aug 5 22:23:50.146969 containerd[1681]: time="2024-08-05T22:23:50.146716568Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.12\" returns image reference \"sha256:1610963ec6edeaf744dc6bc6475bb85db4736faef7394a1ad6f0ccb9d30d2ab3\"" Aug 5 22:23:50.168001 containerd[1681]: time="2024-08-05T22:23:50.167974105Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Aug 5 22:23:50.726994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4013372159.mount: Deactivated successfully. 
Aug 5 22:23:50.745097 containerd[1681]: time="2024-08-05T22:23:50.745043139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.748064 containerd[1681]: time="2024-08-05T22:23:50.748009172Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Aug 5 22:23:50.751028 containerd[1681]: time="2024-08-05T22:23:50.750972305Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.756359 containerd[1681]: time="2024-08-05T22:23:50.756309665Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:50.757164 containerd[1681]: time="2024-08-05T22:23:50.757034973Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 589.026068ms" Aug 5 22:23:50.757164 containerd[1681]: time="2024-08-05T22:23:50.757071573Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Aug 5 22:23:50.778360 containerd[1681]: time="2024-08-05T22:23:50.778331410Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Aug 5 22:23:51.590846 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount223885944.mount: Deactivated successfully. Aug 5 22:23:53.258115 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 10. Aug 5 22:23:53.263685 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:23:53.354508 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:23:53.366776 (kubelet)[2637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 5 22:23:53.412567 kubelet[2637]: E0805 22:23:53.412507 2637 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 5 22:23:53.415176 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 5 22:23:53.415391 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
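The restart counter has now climbed from 3 to 10, one attempt roughly every ten seconds, because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written later in provisioning (typically by kubeadm), so the loop is expected rather than fatal at this stage. A quick sanity check on the cadence, using the "Scheduled restart job" timestamps copied from the entries above:

    from datetime import datetime

    # Timestamps of "Scheduled restart job" for restart counters 3..10, taken from the log above.
    stamps = ["22:22:39.968396", "22:22:50.724920", "22:23:01.508253", "22:23:11.758366",
              "22:23:22.508551", "22:23:32.746694", "22:23:43.008250", "22:23:53.258115"]
    times = [datetime.strptime(s, "%H:%M:%S.%f") for s in stamps]
    gaps = [(b - a).total_seconds() for a, b in zip(times, times[1:])]
    print([round(g, 2) for g in gaps])                    # each gap is a little over 10 s
    print("mean:", round(sum(gaps) / len(gaps), 2), "s")  # consistent with a ~10 s RestartSec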
Aug 5 22:23:57.158890 containerd[1681]: time="2024-08-05T22:23:57.158836180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:57.161497 containerd[1681]: time="2024-08-05T22:23:57.161412510Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Aug 5 22:23:57.163835 containerd[1681]: time="2024-08-05T22:23:57.163773937Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:57.168369 containerd[1681]: time="2024-08-05T22:23:57.168316689Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:57.169935 containerd[1681]: time="2024-08-05T22:23:57.169514603Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 6.391150893s" Aug 5 22:23:57.169935 containerd[1681]: time="2024-08-05T22:23:57.169558003Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Aug 5 22:23:57.190380 containerd[1681]: time="2024-08-05T22:23:57.190350242Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Aug 5 22:23:57.814119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2508751200.mount: Deactivated successfully. 
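Each "Pulled image … in …s" entry pairs the image size with the wall-clock pull time, so the effective pull throughput can be read straight off the log (the coredns pull just below follows the same pattern). A throwaway calculation over the figures reported above:

    # (size in bytes, seconds), copied from the "Pulled image" entries above.
    pulls = {
        "kube-apiserver:v1.28.12":          (34_524_117, 3.058564786),
        "kube-controller-manager:v1.28.12": (33_397_013, 3.860521210),
        "kube-scheduler:v1.28.12":          (18_647_259, 1.575256670),
        "kube-proxy:v1.28.12":              (28_302_788, 2.078110507),
        "pause:3.9":                        (   321_520, 0.589026068),
        "etcd:3.5.10-0":                    (56_649_232, 6.391150893),
    }
    for image, (size_bytes, seconds) in pulls.items():
        print(f"{image:35s} {size_bytes / seconds / 1e6:6.1f} MB/s")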
Aug 5 22:23:58.554679 containerd[1681]: time="2024-08-05T22:23:58.554611909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:58.557423 containerd[1681]: time="2024-08-05T22:23:58.557363041Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.10.1: active requests=0, bytes read=16191757" Aug 5 22:23:58.560641 containerd[1681]: time="2024-08-05T22:23:58.560582078Z" level=info msg="ImageCreate event name:\"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:58.565169 containerd[1681]: time="2024-08-05T22:23:58.565116030Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:23:58.566319 containerd[1681]: time="2024-08-05T22:23:58.565845238Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.10.1\" with image id \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\", repo tag \"registry.k8s.io/coredns/coredns:v1.10.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e\", size \"16190758\" in 1.375459396s" Aug 5 22:23:58.566319 containerd[1681]: time="2024-08-05T22:23:58.565885738Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Aug 5 22:24:00.695128 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:24:00.704717 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:24:00.730877 systemd[1]: Reloading requested from client PID 2736 ('systemctl') (unit session-9.scope)... Aug 5 22:24:00.730991 systemd[1]: Reloading... Aug 5 22:24:00.832477 zram_generator::config[2773]: No configuration found. Aug 5 22:24:00.952847 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:24:01.033990 systemd[1]: Reloading finished in 302 ms. Aug 5 22:24:01.073640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 5 22:24:01.073737 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 5 22:24:01.073998 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:24:01.081747 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:24:06.046649 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:24:06.046822 (kubelet)[2840]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:24:06.091643 kubelet[2840]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:24:06.091643 kubelet[2840]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Aug 5 22:24:06.091643 kubelet[2840]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:24:06.092098 kubelet[2840]: I0805 22:24:06.091697 2840 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:24:07.352951 kubelet[2840]: I0805 22:24:07.352906 2840 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:24:07.352951 kubelet[2840]: I0805 22:24:07.352938 2840 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:24:07.353511 kubelet[2840]: I0805 22:24:07.353236 2840 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:24:07.368036 kubelet[2840]: I0805 22:24:07.368003 2840 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:24:07.368297 kubelet[2840]: E0805 22:24:07.368260 2840 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.381866 kubelet[2840]: I0805 22:24:07.381835 2840 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 5 22:24:07.383030 kubelet[2840]: I0805 22:24:07.383002 2840 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:24:07.383218 kubelet[2840]: I0805 22:24:07.383196 2840 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:24:07.383721 kubelet[2840]: I0805 22:24:07.383693 2840 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:24:07.383721 kubelet[2840]: I0805 22:24:07.383721 2840 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 
22:24:07.384438 kubelet[2840]: I0805 22:24:07.384411 2840 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:24:07.385788 kubelet[2840]: I0805 22:24:07.385765 2840 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:24:07.385864 kubelet[2840]: I0805 22:24:07.385801 2840 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:24:07.385864 kubelet[2840]: I0805 22:24:07.385838 2840 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:24:07.385864 kubelet[2840]: I0805 22:24:07.385856 2840 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:24:07.387958 kubelet[2840]: W0805 22:24:07.387371 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.387958 kubelet[2840]: E0805 22:24:07.387465 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.387958 kubelet[2840]: W0805 22:24:07.387546 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-bfd2eb4520&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.387958 kubelet[2840]: E0805 22:24:07.387585 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-bfd2eb4520&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.387958 kubelet[2840]: I0805 22:24:07.387679 2840 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:24:07.390327 kubelet[2840]: W0805 22:24:07.389436 2840 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
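Every reflector, lease and certificate request in this stretch fails with "dial tcp 10.200.4.17:6443: connect: connection refused" because the kubelet is bootstrapping the control plane on this very node: nothing listens on 6443 until the kube-apiserver static pod it is about to start (see the RunPodSandbox entries further down) comes up. A throwaway probe that distinguishes "nothing listening yet" from "up", assuming the same node address:

    import socket

    def port_state(host: str, port: int, timeout: float = 2.0) -> str:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return "open"
        except ConnectionRefusedError:
            return "connection refused (nothing listening yet)"
        except OSError as exc:
            return f"unreachable: {exc}"

    print("10.200.4.17:6443 ->", port_state("10.200.4.17", 6443))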
Aug 5 22:24:07.390327 kubelet[2840]: I0805 22:24:07.390180 2840 server.go:1232] "Started kubelet" Aug 5 22:24:07.393658 kubelet[2840]: I0805 22:24:07.392470 2840 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:24:07.393658 kubelet[2840]: I0805 22:24:07.392499 2840 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:24:07.393658 kubelet[2840]: I0805 22:24:07.392801 2840 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:24:07.393658 kubelet[2840]: E0805 22:24:07.392989 2840 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.1.0-a-bfd2eb4520.17e8f55f5c66dc44", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.1.0-a-bfd2eb4520", UID:"ci-4012.1.0-a-bfd2eb4520", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.1.0-a-bfd2eb4520"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 24, 7, 390157892, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 24, 7, 390157892, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.1.0-a-bfd2eb4520"}': 'Post "https://10.200.4.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.4.17:6443: connect: connection refused'(may retry after sleeping) Aug 5 22:24:07.393658 kubelet[2840]: I0805 22:24:07.393476 2840 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:24:07.394976 kubelet[2840]: I0805 22:24:07.394958 2840 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:24:07.395141 kubelet[2840]: E0805 22:24:07.395123 2840 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:24:07.395360 kubelet[2840]: E0805 22:24:07.395153 2840 kubelet.go:1431] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:24:07.398384 kubelet[2840]: E0805 22:24:07.398347 2840 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-4012.1.0-a-bfd2eb4520\" not found" Aug 5 22:24:07.398477 kubelet[2840]: I0805 22:24:07.398389 2840 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:24:07.398541 kubelet[2840]: I0805 22:24:07.398501 2840 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:24:07.398579 kubelet[2840]: I0805 22:24:07.398560 2840 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:24:07.398943 kubelet[2840]: W0805 22:24:07.398902 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.399020 kubelet[2840]: E0805 22:24:07.398955 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:07.399473 kubelet[2840]: E0805 22:24:07.399407 2840 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-bfd2eb4520?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="200ms" Aug 5 22:24:07.446642 kubelet[2840]: I0805 22:24:07.446610 2840 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:24:07.446642 kubelet[2840]: I0805 22:24:07.446635 2840 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:24:07.446817 kubelet[2840]: I0805 22:24:07.446669 2840 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:24:07.501150 kubelet[2840]: I0805 22:24:07.501118 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:07.501549 kubelet[2840]: E0805 22:24:07.501515 2840 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:07.600247 kubelet[2840]: E0805 22:24:07.600210 2840 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-bfd2eb4520?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="400ms" Aug 5 22:24:07.703785 kubelet[2840]: I0805 22:24:07.703662 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:07.705599 kubelet[2840]: E0805 22:24:07.704088 2840 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:08.001411 kubelet[2840]: E0805 22:24:08.001290 2840 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-bfd2eb4520?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="800ms" Aug 
5 22:24:08.107151 kubelet[2840]: I0805 22:24:08.107097 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:08.107570 kubelet[2840]: E0805 22:24:08.107542 2840 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:08.212650 kubelet[2840]: W0805 22:24:08.212590 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-bfd2eb4520&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.212650 kubelet[2840]: E0805 22:24:08.212663 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.4.17:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4012.1.0-a-bfd2eb4520&limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.571850 kubelet[2840]: W0805 22:24:08.571787 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.571850 kubelet[2840]: E0805 22:24:08.571856 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.4.17:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.593378 kubelet[2840]: E0805 22:24:08.593280 2840 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-4012.1.0-a-bfd2eb4520.17e8f55f5c66dc44", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-4012.1.0-a-bfd2eb4520", UID:"ci-4012.1.0-a-bfd2eb4520", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-4012.1.0-a-bfd2eb4520"}, FirstTimestamp:time.Date(2024, time.August, 5, 22, 24, 7, 390157892, time.Local), LastTimestamp:time.Date(2024, time.August, 5, 22, 24, 7, 390157892, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-4012.1.0-a-bfd2eb4520"}': 'Post "https://10.200.4.17:6443/api/v1/namespaces/default/events": dial tcp 10.200.4.17:6443: connect: connection refused'(may retry after sleeping) Aug 5 22:24:08.628950 kubelet[2840]: I0805 22:24:08.628869 2840 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:24:08.829945 kubelet[2840]: I0805 22:24:08.630786 2840 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 5 22:24:08.829945 kubelet[2840]: I0805 22:24:08.630833 2840 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:24:08.829945 kubelet[2840]: I0805 22:24:08.630864 2840 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:24:08.829945 kubelet[2840]: E0805 22:24:08.631836 2840 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:24:08.829945 kubelet[2840]: W0805 22:24:08.632073 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.829945 kubelet[2840]: E0805 22:24:08.632129 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.829945 kubelet[2840]: E0805 22:24:08.732440 2840 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:24:08.829945 kubelet[2840]: E0805 22:24:08.802547 2840 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-bfd2eb4520?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="1.6s" Aug 5 22:24:08.874403 kubelet[2840]: I0805 22:24:08.874349 2840 policy_none.go:49] "None policy: Start" Aug 5 22:24:08.875592 kubelet[2840]: I0805 22:24:08.875561 2840 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:24:08.875698 kubelet[2840]: I0805 22:24:08.875606 2840 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:24:08.909648 kubelet[2840]: I0805 22:24:08.909597 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:08.909965 kubelet[2840]: E0805 22:24:08.909945 2840 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:08.914590 kubelet[2840]: W0805 22:24:08.914544 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.914729 kubelet[2840]: E0805 22:24:08.914598 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.4.17:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:08.932679 kubelet[2840]: E0805 22:24:08.932631 2840 kubelet.go:2327] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Aug 5 22:24:09.071182 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 5 22:24:09.085601 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
Aug 5 22:24:09.089569 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 5 22:24:09.095818 kubelet[2840]: I0805 22:24:09.095275 2840 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:24:09.095818 kubelet[2840]: I0805 22:24:09.095692 2840 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:24:09.096619 kubelet[2840]: E0805 22:24:09.096509 2840 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4012.1.0-a-bfd2eb4520\" not found" Aug 5 22:24:09.333864 kubelet[2840]: I0805 22:24:09.333805 2840 topology_manager.go:215] "Topology Admit Handler" podUID="12952a4050cc6a99afa4a56f2f22cadf" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.335972 kubelet[2840]: I0805 22:24:09.335770 2840 topology_manager.go:215] "Topology Admit Handler" podUID="e4606093fc86ea680d78aa5f501c57b3" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.337670 kubelet[2840]: I0805 22:24:09.337642 2840 topology_manager.go:215] "Topology Admit Handler" podUID="2dce664c8d360756d63b4b8eba3e26a8" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.344557 systemd[1]: Created slice kubepods-burstable-pod12952a4050cc6a99afa4a56f2f22cadf.slice - libcontainer container kubepods-burstable-pod12952a4050cc6a99afa4a56f2f22cadf.slice. Aug 5 22:24:09.368377 systemd[1]: Created slice kubepods-burstable-pod2dce664c8d360756d63b4b8eba3e26a8.slice - libcontainer container kubepods-burstable-pod2dce664c8d360756d63b4b8eba3e26a8.slice. Aug 5 22:24:09.383346 systemd[1]: Created slice kubepods-burstable-pode4606093fc86ea680d78aa5f501c57b3.slice - libcontainer container kubepods-burstable-pode4606093fc86ea680d78aa5f501c57b3.slice. 
Aug 5 22:24:09.399962 kubelet[2840]: E0805 22:24:09.399940 2840 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.4.17:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:09.410199 kubelet[2840]: I0805 22:24:09.410166 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410297 kubelet[2840]: I0805 22:24:09.410218 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410297 kubelet[2840]: I0805 22:24:09.410250 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410297 kubelet[2840]: I0805 22:24:09.410277 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410429 kubelet[2840]: I0805 22:24:09.410306 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410429 kubelet[2840]: I0805 22:24:09.410338 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410429 kubelet[2840]: I0805 22:24:09.410367 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410429 kubelet[2840]: I0805 22:24:09.410395 2840 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.410429 kubelet[2840]: I0805 22:24:09.410427 2840 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dce664c8d360756d63b4b8eba3e26a8-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-a-bfd2eb4520\" (UID: \"2dce664c8d360756d63b4b8eba3e26a8\") " pod="kube-system/kube-scheduler-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:09.467796 kubelet[2840]: W0805 22:24:09.467751 2840 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:09.467796 kubelet[2840]: E0805 22:24:09.467802 2840 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.4.17:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.17:6443: connect: connection refused Aug 5 22:24:09.665495 containerd[1681]: time="2024-08-05T22:24:09.665321335Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-a-bfd2eb4520,Uid:12952a4050cc6a99afa4a56f2f22cadf,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:09.682486 containerd[1681]: time="2024-08-05T22:24:09.682172829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-a-bfd2eb4520,Uid:2dce664c8d360756d63b4b8eba3e26a8,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:09.686187 containerd[1681]: time="2024-08-05T22:24:09.685923572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-a-bfd2eb4520,Uid:e4606093fc86ea680d78aa5f501c57b3,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:10.299737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2651085254.mount: Deactivated successfully. 
Aug 5 22:24:10.327003 containerd[1681]: time="2024-08-05T22:24:10.326952667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:24:10.328622 containerd[1681]: time="2024-08-05T22:24:10.328565085Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Aug 5 22:24:10.331984 containerd[1681]: time="2024-08-05T22:24:10.331946224Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:24:10.335817 containerd[1681]: time="2024-08-05T22:24:10.335780868Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:24:10.339516 containerd[1681]: time="2024-08-05T22:24:10.339444611Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:24:10.343807 containerd[1681]: time="2024-08-05T22:24:10.343739860Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:24:10.349019 containerd[1681]: time="2024-08-05T22:24:10.348734518Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Aug 5 22:24:10.353860 containerd[1681]: time="2024-08-05T22:24:10.353827076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 5 22:24:10.354871 containerd[1681]: time="2024-08-05T22:24:10.354831888Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 672.552358ms" Aug 5 22:24:10.359924 containerd[1681]: time="2024-08-05T22:24:10.359877646Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 694.42911ms" Aug 5 22:24:10.369806 containerd[1681]: time="2024-08-05T22:24:10.369526958Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 683.505484ms" Aug 5 22:24:10.403921 kubelet[2840]: E0805 22:24:10.403884 2840 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.17:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4012.1.0-a-bfd2eb4520?timeout=10s\": dial tcp 10.200.4.17:6443: connect: connection refused" interval="3.2s" Aug 5 
22:24:10.512676 kubelet[2840]: I0805 22:24:10.512637 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:10.513438 kubelet[2840]: E0805 22:24:10.512992 2840 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.4.17:6443/api/v1/nodes\": dial tcp 10.200.4.17:6443: connect: connection refused" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766616938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766663738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766690339Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766711539Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766589438Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766653638Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766680639Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:10.766798 containerd[1681]: time="2024-08-05T22:24:10.766702039Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.769276 containerd[1681]: time="2024-08-05T22:24:10.768230357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:10.770584 containerd[1681]: time="2024-08-05T22:24:10.770363381Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.770584 containerd[1681]: time="2024-08-05T22:24:10.770412282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:10.770584 containerd[1681]: time="2024-08-05T22:24:10.770427282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:10.801620 systemd[1]: Started cri-containerd-2ba0f045d3d38ed647e8670d60d40d0e3b6ff4c823dfd01bcb17d7c4173f3f6e.scope - libcontainer container 2ba0f045d3d38ed647e8670d60d40d0e3b6ff4c823dfd01bcb17d7c4173f3f6e. Aug 5 22:24:10.807925 systemd[1]: Started cri-containerd-8bb0e55c89cf388c3773424034c8091929388f528a88e627f79756c6be3a7e92.scope - libcontainer container 8bb0e55c89cf388c3773424034c8091929388f528a88e627f79756c6be3a7e92. 
Aug 5 22:24:10.810505 systemd[1]: Started cri-containerd-d480c57324d27fe63f8379685529e405786798a5b3fa60d5d4e5929e5c09d5f4.scope - libcontainer container d480c57324d27fe63f8379685529e405786798a5b3fa60d5d4e5929e5c09d5f4. Aug 5 22:24:10.880077 containerd[1681]: time="2024-08-05T22:24:10.880022646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4012.1.0-a-bfd2eb4520,Uid:2dce664c8d360756d63b4b8eba3e26a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"d480c57324d27fe63f8379685529e405786798a5b3fa60d5d4e5929e5c09d5f4\"" Aug 5 22:24:10.889509 containerd[1681]: time="2024-08-05T22:24:10.889442255Z" level=info msg="CreateContainer within sandbox \"d480c57324d27fe63f8379685529e405786798a5b3fa60d5d4e5929e5c09d5f4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 5 22:24:10.897330 containerd[1681]: time="2024-08-05T22:24:10.897275345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4012.1.0-a-bfd2eb4520,Uid:12952a4050cc6a99afa4a56f2f22cadf,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ba0f045d3d38ed647e8670d60d40d0e3b6ff4c823dfd01bcb17d7c4173f3f6e\"" Aug 5 22:24:10.902477 containerd[1681]: time="2024-08-05T22:24:10.902329503Z" level=info msg="CreateContainer within sandbox \"2ba0f045d3d38ed647e8670d60d40d0e3b6ff4c823dfd01bcb17d7c4173f3f6e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 5 22:24:10.903789 containerd[1681]: time="2024-08-05T22:24:10.903758420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4012.1.0-a-bfd2eb4520,Uid:e4606093fc86ea680d78aa5f501c57b3,Namespace:kube-system,Attempt:0,} returns sandbox id \"8bb0e55c89cf388c3773424034c8091929388f528a88e627f79756c6be3a7e92\"" Aug 5 22:24:10.907766 containerd[1681]: time="2024-08-05T22:24:10.907729266Z" level=info msg="CreateContainer within sandbox \"8bb0e55c89cf388c3773424034c8091929388f528a88e627f79756c6be3a7e92\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 5 22:24:10.944121 containerd[1681]: time="2024-08-05T22:24:10.944076685Z" level=info msg="CreateContainer within sandbox \"d480c57324d27fe63f8379685529e405786798a5b3fa60d5d4e5929e5c09d5f4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1ad28d61732afba7c8f353b2f66ff361855e9fa6354fc0b62d95e23090d70903\"" Aug 5 22:24:10.944885 containerd[1681]: time="2024-08-05T22:24:10.944853894Z" level=info msg="StartContainer for \"1ad28d61732afba7c8f353b2f66ff361855e9fa6354fc0b62d95e23090d70903\"" Aug 5 22:24:10.959638 containerd[1681]: time="2024-08-05T22:24:10.959594464Z" level=info msg="CreateContainer within sandbox \"2ba0f045d3d38ed647e8670d60d40d0e3b6ff4c823dfd01bcb17d7c4173f3f6e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"8c64cdb000957983909ea75295904b9294524f2c8ad5f35e28cfe281a2a7229a\"" Aug 5 22:24:10.960288 containerd[1681]: time="2024-08-05T22:24:10.960255971Z" level=info msg="StartContainer for \"8c64cdb000957983909ea75295904b9294524f2c8ad5f35e28cfe281a2a7229a\"" Aug 5 22:24:10.966620 containerd[1681]: time="2024-08-05T22:24:10.966534644Z" level=info msg="CreateContainer within sandbox \"8bb0e55c89cf388c3773424034c8091929388f528a88e627f79756c6be3a7e92\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"199f598ae71bb19fa27019957b1d0346bf41baf332384fd5ac0d922e54ce26ae\"" Aug 5 22:24:10.968474 containerd[1681]: time="2024-08-05T22:24:10.967493155Z" level=info msg="StartContainer for 
\"199f598ae71bb19fa27019957b1d0346bf41baf332384fd5ac0d922e54ce26ae\"" Aug 5 22:24:10.983809 systemd[1]: Started cri-containerd-1ad28d61732afba7c8f353b2f66ff361855e9fa6354fc0b62d95e23090d70903.scope - libcontainer container 1ad28d61732afba7c8f353b2f66ff361855e9fa6354fc0b62d95e23090d70903. Aug 5 22:24:11.011634 systemd[1]: Started cri-containerd-8c64cdb000957983909ea75295904b9294524f2c8ad5f35e28cfe281a2a7229a.scope - libcontainer container 8c64cdb000957983909ea75295904b9294524f2c8ad5f35e28cfe281a2a7229a. Aug 5 22:24:11.023520 systemd[1]: Started cri-containerd-199f598ae71bb19fa27019957b1d0346bf41baf332384fd5ac0d922e54ce26ae.scope - libcontainer container 199f598ae71bb19fa27019957b1d0346bf41baf332384fd5ac0d922e54ce26ae. Aug 5 22:24:11.081682 containerd[1681]: time="2024-08-05T22:24:11.081637872Z" level=info msg="StartContainer for \"1ad28d61732afba7c8f353b2f66ff361855e9fa6354fc0b62d95e23090d70903\" returns successfully" Aug 5 22:24:11.113474 containerd[1681]: time="2024-08-05T22:24:11.111559017Z" level=info msg="StartContainer for \"8c64cdb000957983909ea75295904b9294524f2c8ad5f35e28cfe281a2a7229a\" returns successfully" Aug 5 22:24:11.125015 containerd[1681]: time="2024-08-05T22:24:11.124972471Z" level=info msg="StartContainer for \"199f598ae71bb19fa27019957b1d0346bf41baf332384fd5ac0d922e54ce26ae\" returns successfully" Aug 5 22:24:13.607991 kubelet[2840]: E0805 22:24:13.607922 2840 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4012.1.0-a-bfd2eb4520\" not found" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:13.715592 kubelet[2840]: I0805 22:24:13.715563 2840 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:13.721008 kubelet[2840]: I0805 22:24:13.720975 2840 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:14.013661 kubelet[2840]: E0805 22:24:14.013606 2840 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:14.390437 kubelet[2840]: I0805 22:24:14.390280 2840 apiserver.go:52] "Watching apiserver" Aug 5 22:24:14.399667 kubelet[2840]: I0805 22:24:14.399620 2840 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:24:16.164179 systemd[1]: Reloading requested from client PID 3116 ('systemctl') (unit session-9.scope)... Aug 5 22:24:16.164196 systemd[1]: Reloading... Aug 5 22:24:16.265481 zram_generator::config[3150]: No configuration found. Aug 5 22:24:16.402026 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 5 22:24:16.499217 systemd[1]: Reloading finished in 334 ms. Aug 5 22:24:16.541919 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 5 22:24:16.542348 kubelet[2840]: I0805 22:24:16.541904 2840 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:24:16.560955 systemd[1]: kubelet.service: Deactivated successfully. Aug 5 22:24:16.561278 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:24:16.567757 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Aug 5 22:24:16.663938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 5 22:24:16.674794 (kubelet)[3220]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 5 22:24:16.717818 kubelet[3220]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:24:16.717818 kubelet[3220]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 5 22:24:16.717818 kubelet[3220]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 5 22:24:16.718281 kubelet[3220]: I0805 22:24:16.717871 3220 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 5 22:24:16.722100 kubelet[3220]: I0805 22:24:16.722074 3220 server.go:467] "Kubelet version" kubeletVersion="v1.28.7" Aug 5 22:24:16.722100 kubelet[3220]: I0805 22:24:16.722096 3220 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 5 22:24:16.722313 kubelet[3220]: I0805 22:24:16.722294 3220 server.go:895] "Client rotation is on, will bootstrap in background" Aug 5 22:24:16.723569 kubelet[3220]: I0805 22:24:16.723548 3220 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 5 22:24:16.725244 kubelet[3220]: I0805 22:24:16.724468 3220 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.732766 3220 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.732992 3220 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.733135 3220 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.733154 3220 topology_manager.go:138] "Creating topology manager with none policy" Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.733162 3220 container_manager_linux.go:301] "Creating device plugin manager" Aug 5 22:24:16.733482 kubelet[3220]: I0805 22:24:16.733201 3220 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:24:16.733884 kubelet[3220]: I0805 22:24:16.733292 3220 kubelet.go:393] "Attempting to sync node with API server" Aug 5 22:24:16.733884 kubelet[3220]: I0805 22:24:16.733306 3220 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 5 22:24:16.733884 kubelet[3220]: I0805 22:24:16.733329 3220 kubelet.go:309] "Adding apiserver pod source" Aug 5 22:24:16.733884 kubelet[3220]: I0805 22:24:16.733343 3220 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 5 22:24:16.735771 kubelet[3220]: I0805 22:24:16.735750 3220 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="v1.7.18" apiVersion="v1" Aug 5 22:24:16.736323 kubelet[3220]: I0805 22:24:16.736302 3220 server.go:1232] "Started kubelet" Aug 5 22:24:16.738426 kubelet[3220]: I0805 22:24:16.738403 3220 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 5 22:24:16.747556 kubelet[3220]: I0805 22:24:16.747343 3220 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Aug 5 22:24:16.748365 kubelet[3220]: I0805 22:24:16.748345 3220 server.go:462] "Adding debug handlers to kubelet server" Aug 5 22:24:16.751124 kubelet[3220]: I0805 22:24:16.749790 3220 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Aug 5 22:24:16.751124 kubelet[3220]: I0805 22:24:16.749992 3220 server.go:233] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 5 22:24:16.754557 kubelet[3220]: I0805 22:24:16.751934 3220 volume_manager.go:291] "Starting Kubelet Volume Manager" Aug 5 22:24:16.754557 kubelet[3220]: I0805 22:24:16.753905 3220 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Aug 5 22:24:16.754557 kubelet[3220]: I0805 22:24:16.754047 3220 reconciler_new.go:29] "Reconciler: start to sync state" Aug 5 22:24:16.757133 kubelet[3220]: I0805 22:24:16.756040 3220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 5 22:24:16.757512 kubelet[3220]: I0805 22:24:16.757495 3220 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 5 22:24:16.757590 kubelet[3220]: I0805 22:24:16.757517 3220 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 5 22:24:16.757590 kubelet[3220]: I0805 22:24:16.757562 3220 kubelet.go:2303] "Starting kubelet main sync loop" Aug 5 22:24:16.757692 kubelet[3220]: E0805 22:24:16.757677 3220 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 5 22:24:16.761521 kubelet[3220]: E0805 22:24:16.761496 3220 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Aug 5 22:24:16.761647 kubelet[3220]: E0805 22:24:16.761637 3220 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 5 22:24:16.835401 kubelet[3220]: I0805 22:24:16.835364 3220 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 5 22:24:16.835401 kubelet[3220]: I0805 22:24:16.835387 3220 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 5 22:24:16.835401 kubelet[3220]: I0805 22:24:16.835407 3220 state_mem.go:36] "Initialized new in-memory state store" Aug 5 22:24:16.835675 kubelet[3220]: I0805 22:24:16.835621 3220 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 5 22:24:16.835675 kubelet[3220]: I0805 22:24:16.835648 3220 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 5 22:24:16.835675 kubelet[3220]: I0805 22:24:16.835658 3220 policy_none.go:49] "None policy: Start" Aug 5 22:24:16.836417 kubelet[3220]: I0805 22:24:16.836373 3220 memory_manager.go:169] "Starting memorymanager" policy="None" Aug 5 22:24:16.836417 kubelet[3220]: I0805 22:24:16.836402 3220 state_mem.go:35] "Initializing new in-memory state store" Aug 5 22:24:16.836673 kubelet[3220]: I0805 22:24:16.836648 3220 state_mem.go:75] "Updated machine memory state" Aug 5 22:24:16.842094 kubelet[3220]: I0805 22:24:16.841692 3220 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 5 22:24:16.842094 kubelet[3220]: I0805 22:24:16.841941 3220 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 5 22:24:16.857955 kubelet[3220]: I0805 22:24:16.857632 3220 kubelet_node_status.go:70] "Attempting to register node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.858830 kubelet[3220]: I0805 22:24:16.857835 3220 topology_manager.go:215] "Topology Admit Handler" podUID="12952a4050cc6a99afa4a56f2f22cadf" podNamespace="kube-system" podName="kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.859028 kubelet[3220]: I0805 22:24:16.858926 3220 topology_manager.go:215] "Topology Admit 
Handler" podUID="e4606093fc86ea680d78aa5f501c57b3" podNamespace="kube-system" podName="kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.859028 kubelet[3220]: I0805 22:24:16.858979 3220 topology_manager.go:215] "Topology Admit Handler" podUID="2dce664c8d360756d63b4b8eba3e26a8" podNamespace="kube-system" podName="kube-scheduler-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.868917 kubelet[3220]: W0805 22:24:16.868799 3220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:24:16.869486 kubelet[3220]: W0805 22:24:16.869163 3220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:24:16.870442 kubelet[3220]: W0805 22:24:16.870327 3220 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Aug 5 22:24:16.872291 kubelet[3220]: I0805 22:24:16.872152 3220 kubelet_node_status.go:108] "Node was previously registered" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.872291 kubelet[3220]: I0805 22:24:16.872230 3220 kubelet_node_status.go:73] "Successfully registered node" node="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955250 kubelet[3220]: I0805 22:24:16.955204 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-k8s-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955250 kubelet[3220]: I0805 22:24:16.955261 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955512 kubelet[3220]: I0805 22:24:16.955290 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2dce664c8d360756d63b4b8eba3e26a8-kubeconfig\") pod \"kube-scheduler-ci-4012.1.0-a-bfd2eb4520\" (UID: \"2dce664c8d360756d63b4b8eba3e26a8\") " pod="kube-system/kube-scheduler-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955512 kubelet[3220]: I0805 22:24:16.955311 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-ca-certs\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955512 kubelet[3220]: I0805 22:24:16.955337 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-k8s-certs\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955512 kubelet[3220]: I0805 22:24:16.955382 3220 
reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/12952a4050cc6a99afa4a56f2f22cadf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4012.1.0-a-bfd2eb4520\" (UID: \"12952a4050cc6a99afa4a56f2f22cadf\") " pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955512 kubelet[3220]: I0805 22:24:16.955407 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-ca-certs\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955680 kubelet[3220]: I0805 22:24:16.955437 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-flexvolume-dir\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:16.955680 kubelet[3220]: I0805 22:24:16.955495 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e4606093fc86ea680d78aa5f501c57b3-kubeconfig\") pod \"kube-controller-manager-ci-4012.1.0-a-bfd2eb4520\" (UID: \"e4606093fc86ea680d78aa5f501c57b3\") " pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" Aug 5 22:24:17.735371 kubelet[3220]: I0805 22:24:17.735319 3220 apiserver.go:52] "Watching apiserver" Aug 5 22:24:17.754238 kubelet[3220]: I0805 22:24:17.754200 3220 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Aug 5 22:24:17.821752 kubelet[3220]: I0805 22:24:17.821694 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4012.1.0-a-bfd2eb4520" podStartSLOduration=1.8216508710000001 podCreationTimestamp="2024-08-05 22:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:17.821645671 +0000 UTC m=+1.142776602" watchObservedRunningTime="2024-08-05 22:24:17.821650871 +0000 UTC m=+1.142781802" Aug 5 22:24:17.844169 kubelet[3220]: I0805 22:24:17.843938 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4012.1.0-a-bfd2eb4520" podStartSLOduration=1.843892224 podCreationTimestamp="2024-08-05 22:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:17.843444319 +0000 UTC m=+1.164575350" watchObservedRunningTime="2024-08-05 22:24:17.843892224 +0000 UTC m=+1.165023255" Aug 5 22:24:17.844169 kubelet[3220]: I0805 22:24:17.844089 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4012.1.0-a-bfd2eb4520" podStartSLOduration=1.844064526 podCreationTimestamp="2024-08-05 22:24:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:17.830990078 +0000 UTC m=+1.152121109" watchObservedRunningTime="2024-08-05 22:24:17.844064526 +0000 UTC 
m=+1.165195557" Aug 5 22:24:23.961477 sudo[2345]: pam_unix(sudo:session): session closed for user root Aug 5 22:24:24.059392 sshd[2342]: pam_unix(sshd:session): session closed for user core Aug 5 22:24:24.064701 systemd-logind[1650]: Session 9 logged out. Waiting for processes to exit. Aug 5 22:24:24.065830 systemd[1]: sshd@6-10.200.4.17:22-10.200.16.10:37422.service: Deactivated successfully. Aug 5 22:24:24.069159 systemd[1]: session-9.scope: Deactivated successfully. Aug 5 22:24:24.069399 systemd[1]: session-9.scope: Consumed 4.040s CPU time, 140.1M memory peak, 0B memory swap peak. Aug 5 22:24:24.070537 systemd-logind[1650]: Removed session 9. Aug 5 22:24:28.668838 kubelet[3220]: I0805 22:24:28.668805 3220 kuberuntime_manager.go:1528] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 5 22:24:28.669345 containerd[1681]: time="2024-08-05T22:24:28.669245850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 5 22:24:28.669823 kubelet[3220]: I0805 22:24:28.669484 3220 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 5 22:24:29.207887 kubelet[3220]: I0805 22:24:29.207017 3220 topology_manager.go:215] "Topology Admit Handler" podUID="6734d40d-b206-4cf6-b4ec-b892ed6396f5" podNamespace="kube-system" podName="kube-proxy-rhhph" Aug 5 22:24:29.218499 systemd[1]: Created slice kubepods-besteffort-pod6734d40d_b206_4cf6_b4ec_b892ed6396f5.slice - libcontainer container kubepods-besteffort-pod6734d40d_b206_4cf6_b4ec_b892ed6396f5.slice. Aug 5 22:24:29.268733 kubelet[3220]: I0805 22:24:29.268562 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6734d40d-b206-4cf6-b4ec-b892ed6396f5-lib-modules\") pod \"kube-proxy-rhhph\" (UID: \"6734d40d-b206-4cf6-b4ec-b892ed6396f5\") " pod="kube-system/kube-proxy-rhhph" Aug 5 22:24:29.268733 kubelet[3220]: I0805 22:24:29.268630 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49nw\" (UniqueName: \"kubernetes.io/projected/6734d40d-b206-4cf6-b4ec-b892ed6396f5-kube-api-access-v49nw\") pod \"kube-proxy-rhhph\" (UID: \"6734d40d-b206-4cf6-b4ec-b892ed6396f5\") " pod="kube-system/kube-proxy-rhhph" Aug 5 22:24:29.268733 kubelet[3220]: I0805 22:24:29.268660 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6734d40d-b206-4cf6-b4ec-b892ed6396f5-kube-proxy\") pod \"kube-proxy-rhhph\" (UID: \"6734d40d-b206-4cf6-b4ec-b892ed6396f5\") " pod="kube-system/kube-proxy-rhhph" Aug 5 22:24:29.268733 kubelet[3220]: I0805 22:24:29.268692 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6734d40d-b206-4cf6-b4ec-b892ed6396f5-xtables-lock\") pod \"kube-proxy-rhhph\" (UID: \"6734d40d-b206-4cf6-b4ec-b892ed6396f5\") " pod="kube-system/kube-proxy-rhhph" Aug 5 22:24:29.374658 kubelet[3220]: E0805 22:24:29.374603 3220 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 5 22:24:29.374658 kubelet[3220]: E0805 22:24:29.374633 3220 projected.go:198] Error preparing data for projected volume kube-api-access-v49nw for pod kube-system/kube-proxy-rhhph: configmap "kube-root-ca.crt" not found Aug 5 22:24:29.374875 kubelet[3220]: E0805 
22:24:29.374717 3220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6734d40d-b206-4cf6-b4ec-b892ed6396f5-kube-api-access-v49nw podName:6734d40d-b206-4cf6-b4ec-b892ed6396f5 nodeName:}" failed. No retries permitted until 2024-08-05 22:24:29.874684277 +0000 UTC m=+13.195815208 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-v49nw" (UniqueName: "kubernetes.io/projected/6734d40d-b206-4cf6-b4ec-b892ed6396f5-kube-api-access-v49nw") pod "kube-proxy-rhhph" (UID: "6734d40d-b206-4cf6-b4ec-b892ed6396f5") : configmap "kube-root-ca.crt" not found Aug 5 22:24:29.695689 kubelet[3220]: I0805 22:24:29.695100 3220 topology_manager.go:215] "Topology Admit Handler" podUID="28eee6ef-e73e-45fa-892e-b5d48a062c5d" podNamespace="tigera-operator" podName="tigera-operator-76c4974c85-nc6zj" Aug 5 22:24:29.706153 systemd[1]: Created slice kubepods-besteffort-pod28eee6ef_e73e_45fa_892e_b5d48a062c5d.slice - libcontainer container kubepods-besteffort-pod28eee6ef_e73e_45fa_892e_b5d48a062c5d.slice. Aug 5 22:24:29.772822 kubelet[3220]: I0805 22:24:29.772772 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jsfff\" (UniqueName: \"kubernetes.io/projected/28eee6ef-e73e-45fa-892e-b5d48a062c5d-kube-api-access-jsfff\") pod \"tigera-operator-76c4974c85-nc6zj\" (UID: \"28eee6ef-e73e-45fa-892e-b5d48a062c5d\") " pod="tigera-operator/tigera-operator-76c4974c85-nc6zj" Aug 5 22:24:29.773028 kubelet[3220]: I0805 22:24:29.772870 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/28eee6ef-e73e-45fa-892e-b5d48a062c5d-var-lib-calico\") pod \"tigera-operator-76c4974c85-nc6zj\" (UID: \"28eee6ef-e73e-45fa-892e-b5d48a062c5d\") " pod="tigera-operator/tigera-operator-76c4974c85-nc6zj" Aug 5 22:24:30.013674 containerd[1681]: time="2024-08-05T22:24:30.013632050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-nc6zj,Uid:28eee6ef-e73e-45fa-892e-b5d48a062c5d,Namespace:tigera-operator,Attempt:0,}" Aug 5 22:24:30.060962 containerd[1681]: time="2024-08-05T22:24:30.060854995Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:30.060962 containerd[1681]: time="2024-08-05T22:24:30.060896795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:30.060962 containerd[1681]: time="2024-08-05T22:24:30.060924896Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:30.061285 containerd[1681]: time="2024-08-05T22:24:30.060965596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:30.089629 systemd[1]: Started cri-containerd-4c0b42bdacf979d84ecc14d06cca29e28bae4d349e2f7d8e7bca6697671e72ff.scope - libcontainer container 4c0b42bdacf979d84ecc14d06cca29e28bae4d349e2f7d8e7bca6697671e72ff. 
Aug 5 22:24:30.127355 containerd[1681]: time="2024-08-05T22:24:30.127304262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhhph,Uid:6734d40d-b206-4cf6-b4ec-b892ed6396f5,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:30.133319 containerd[1681]: time="2024-08-05T22:24:30.133281031Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-76c4974c85-nc6zj,Uid:28eee6ef-e73e-45fa-892e-b5d48a062c5d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c0b42bdacf979d84ecc14d06cca29e28bae4d349e2f7d8e7bca6697671e72ff\"" Aug 5 22:24:30.135951 containerd[1681]: time="2024-08-05T22:24:30.135917961Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\"" Aug 5 22:24:30.169582 containerd[1681]: time="2024-08-05T22:24:30.169284546Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:30.169582 containerd[1681]: time="2024-08-05T22:24:30.169352347Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:30.169582 containerd[1681]: time="2024-08-05T22:24:30.169428648Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:30.169582 containerd[1681]: time="2024-08-05T22:24:30.169480148Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:30.198890 systemd[1]: Started cri-containerd-367facebc5888bc037e844d849c78f099f96a605859d77ba50068c02a1530b68.scope - libcontainer container 367facebc5888bc037e844d849c78f099f96a605859d77ba50068c02a1530b68. Aug 5 22:24:30.220924 containerd[1681]: time="2024-08-05T22:24:30.220885742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhhph,Uid:6734d40d-b206-4cf6-b4ec-b892ed6396f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"367facebc5888bc037e844d849c78f099f96a605859d77ba50068c02a1530b68\"" Aug 5 22:24:30.223731 containerd[1681]: time="2024-08-05T22:24:30.223597773Z" level=info msg="CreateContainer within sandbox \"367facebc5888bc037e844d849c78f099f96a605859d77ba50068c02a1530b68\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 5 22:24:30.270492 containerd[1681]: time="2024-08-05T22:24:30.270357212Z" level=info msg="CreateContainer within sandbox \"367facebc5888bc037e844d849c78f099f96a605859d77ba50068c02a1530b68\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"396deb536ad0a17ef6d58fd281cf9bc281959cedd865456c79b067f2f20ebcc0\"" Aug 5 22:24:30.271699 containerd[1681]: time="2024-08-05T22:24:30.271661927Z" level=info msg="StartContainer for \"396deb536ad0a17ef6d58fd281cf9bc281959cedd865456c79b067f2f20ebcc0\"" Aug 5 22:24:30.298648 systemd[1]: Started cri-containerd-396deb536ad0a17ef6d58fd281cf9bc281959cedd865456c79b067f2f20ebcc0.scope - libcontainer container 396deb536ad0a17ef6d58fd281cf9bc281959cedd865456c79b067f2f20ebcc0. Aug 5 22:24:30.337825 containerd[1681]: time="2024-08-05T22:24:30.337780790Z" level=info msg="StartContainer for \"396deb536ad0a17ef6d58fd281cf9bc281959cedd865456c79b067f2f20ebcc0\" returns successfully" Aug 5 22:24:32.122625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1696117981.mount: Deactivated successfully. 
Aug 5 22:24:32.706843 containerd[1681]: time="2024-08-05T22:24:32.706790927Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.34.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:32.708726 containerd[1681]: time="2024-08-05T22:24:32.708672749Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.34.0: active requests=0, bytes read=22076084" Aug 5 22:24:32.714024 containerd[1681]: time="2024-08-05T22:24:32.713995610Z" level=info msg="ImageCreate event name:\"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:32.718428 containerd[1681]: time="2024-08-05T22:24:32.718217759Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:32.719064 containerd[1681]: time="2024-08-05T22:24:32.719030768Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.34.0\" with image id \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\", repo tag \"quay.io/tigera/operator:v1.34.0\", repo digest \"quay.io/tigera/operator@sha256:479ddc7ff9ab095058b96f6710bbf070abada86332e267d6e5dcc1df36ba2cc5\", size \"22070263\" in 2.583076007s" Aug 5 22:24:32.719146 containerd[1681]: time="2024-08-05T22:24:32.719068969Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.34.0\" returns image reference \"sha256:01249e32d0f6f7d0ad79761d634d16738f1a5792b893f202f9a417c63034411d\"" Aug 5 22:24:32.721420 containerd[1681]: time="2024-08-05T22:24:32.721207893Z" level=info msg="CreateContainer within sandbox \"4c0b42bdacf979d84ecc14d06cca29e28bae4d349e2f7d8e7bca6697671e72ff\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 5 22:24:32.752988 containerd[1681]: time="2024-08-05T22:24:32.752948060Z" level=info msg="CreateContainer within sandbox \"4c0b42bdacf979d84ecc14d06cca29e28bae4d349e2f7d8e7bca6697671e72ff\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"a5100fb79ab97ee9ebf2b0ccf9558e4c77479480f08adc39ab840164ea9b0465\"" Aug 5 22:24:32.755664 containerd[1681]: time="2024-08-05T22:24:32.754576678Z" level=info msg="StartContainer for \"a5100fb79ab97ee9ebf2b0ccf9558e4c77479480f08adc39ab840164ea9b0465\"" Aug 5 22:24:32.785658 systemd[1]: Started cri-containerd-a5100fb79ab97ee9ebf2b0ccf9558e4c77479480f08adc39ab840164ea9b0465.scope - libcontainer container a5100fb79ab97ee9ebf2b0ccf9558e4c77479480f08adc39ab840164ea9b0465. 
Aug 5 22:24:32.818049 containerd[1681]: time="2024-08-05T22:24:32.818000410Z" level=info msg="StartContainer for \"a5100fb79ab97ee9ebf2b0ccf9558e4c77479480f08adc39ab840164ea9b0465\" returns successfully" Aug 5 22:24:32.845329 kubelet[3220]: I0805 22:24:32.845296 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-rhhph" podStartSLOduration=3.843800308 podCreationTimestamp="2024-08-05 22:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:24:30.837628858 +0000 UTC m=+14.158759789" watchObservedRunningTime="2024-08-05 22:24:32.843800308 +0000 UTC m=+16.164931239" Aug 5 22:24:35.686014 kubelet[3220]: I0805 22:24:35.685967 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="tigera-operator/tigera-operator-76c4974c85-nc6zj" podStartSLOduration=4.100469869 podCreationTimestamp="2024-08-05 22:24:29 +0000 UTC" firstStartedPulling="2024-08-05 22:24:30.134385843 +0000 UTC m=+13.455516874" lastFinishedPulling="2024-08-05 22:24:32.719843678 +0000 UTC m=+16.040974609" observedRunningTime="2024-08-05 22:24:32.846498539 +0000 UTC m=+16.167629470" watchObservedRunningTime="2024-08-05 22:24:35.685927604 +0000 UTC m=+19.007058535" Aug 5 22:24:35.686603 kubelet[3220]: I0805 22:24:35.686155 3220 topology_manager.go:215] "Topology Admit Handler" podUID="4395fd9f-62b8-499d-8e51-43ae1250fe61" podNamespace="calico-system" podName="calico-typha-586d96d4d4-z7xcq" Aug 5 22:24:35.699611 systemd[1]: Created slice kubepods-besteffort-pod4395fd9f_62b8_499d_8e51_43ae1250fe61.slice - libcontainer container kubepods-besteffort-pod4395fd9f_62b8_499d_8e51_43ae1250fe61.slice. Aug 5 22:24:35.770244 kubelet[3220]: I0805 22:24:35.770196 3220 topology_manager.go:215] "Topology Admit Handler" podUID="6caeba7c-256a-4e97-8143-e269aee50045" podNamespace="calico-system" podName="calico-node-2kjtm" Aug 5 22:24:35.780320 systemd[1]: Created slice kubepods-besteffort-pod6caeba7c_256a_4e97_8143_e269aee50045.slice - libcontainer container kubepods-besteffort-pod6caeba7c_256a_4e97_8143_e269aee50045.slice. 
Aug 5 22:24:35.812427 kubelet[3220]: I0805 22:24:35.812399 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4395fd9f-62b8-499d-8e51-43ae1250fe61-tigera-ca-bundle\") pod \"calico-typha-586d96d4d4-z7xcq\" (UID: \"4395fd9f-62b8-499d-8e51-43ae1250fe61\") " pod="calico-system/calico-typha-586d96d4d4-z7xcq" Aug 5 22:24:35.812427 kubelet[3220]: I0805 22:24:35.812445 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-xtables-lock\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812427 kubelet[3220]: I0805 22:24:35.812492 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-cni-log-dir\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812427 kubelet[3220]: I0805 22:24:35.812536 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-flexvol-driver-host\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812427 kubelet[3220]: I0805 22:24:35.812586 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-policysync\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812989 kubelet[3220]: I0805 22:24:35.812609 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-cni-bin-dir\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812989 kubelet[3220]: I0805 22:24:35.812628 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/6caeba7c-256a-4e97-8143-e269aee50045-node-certs\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812989 kubelet[3220]: I0805 22:24:35.812671 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-var-lib-calico\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.812989 kubelet[3220]: I0805 22:24:35.812757 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4395fd9f-62b8-499d-8e51-43ae1250fe61-typha-certs\") pod \"calico-typha-586d96d4d4-z7xcq\" (UID: \"4395fd9f-62b8-499d-8e51-43ae1250fe61\") " pod="calico-system/calico-typha-586d96d4d4-z7xcq" Aug 5 22:24:35.812989 kubelet[3220]: I0805 22:24:35.812797 
3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vldk4\" (UniqueName: \"kubernetes.io/projected/6caeba7c-256a-4e97-8143-e269aee50045-kube-api-access-vldk4\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.813169 kubelet[3220]: I0805 22:24:35.812826 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-cni-net-dir\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.813169 kubelet[3220]: I0805 22:24:35.812866 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pc8pc\" (UniqueName: \"kubernetes.io/projected/4395fd9f-62b8-499d-8e51-43ae1250fe61-kube-api-access-pc8pc\") pod \"calico-typha-586d96d4d4-z7xcq\" (UID: \"4395fd9f-62b8-499d-8e51-43ae1250fe61\") " pod="calico-system/calico-typha-586d96d4d4-z7xcq" Aug 5 22:24:35.813169 kubelet[3220]: I0805 22:24:35.812907 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-lib-modules\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.813169 kubelet[3220]: I0805 22:24:35.812943 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6caeba7c-256a-4e97-8143-e269aee50045-tigera-ca-bundle\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.813169 kubelet[3220]: I0805 22:24:35.812971 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/6caeba7c-256a-4e97-8143-e269aee50045-var-run-calico\") pod \"calico-node-2kjtm\" (UID: \"6caeba7c-256a-4e97-8143-e269aee50045\") " pod="calico-system/calico-node-2kjtm" Aug 5 22:24:35.884130 kubelet[3220]: I0805 22:24:35.884090 3220 topology_manager.go:215] "Topology Admit Handler" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" podNamespace="calico-system" podName="csi-node-driver-bg5zn" Aug 5 22:24:35.884457 kubelet[3220]: E0805 22:24:35.884438 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:35.913893 kubelet[3220]: I0805 22:24:35.913604 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5d7f5978-577b-47bc-9e09-7fc8851b40e1-kubelet-dir\") pod \"csi-node-driver-bg5zn\" (UID: \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\") " pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:35.915915 kubelet[3220]: I0805 22:24:35.914965 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5d7f5978-577b-47bc-9e09-7fc8851b40e1-socket-dir\") 
pod \"csi-node-driver-bg5zn\" (UID: \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\") " pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:35.915915 kubelet[3220]: I0805 22:24:35.915067 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5d7f5978-577b-47bc-9e09-7fc8851b40e1-registration-dir\") pod \"csi-node-driver-bg5zn\" (UID: \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\") " pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:35.915915 kubelet[3220]: I0805 22:24:35.915132 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rgfpl\" (UniqueName: \"kubernetes.io/projected/5d7f5978-577b-47bc-9e09-7fc8851b40e1-kube-api-access-rgfpl\") pod \"csi-node-driver-bg5zn\" (UID: \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\") " pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:35.915915 kubelet[3220]: I0805 22:24:35.915243 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5d7f5978-577b-47bc-9e09-7fc8851b40e1-varrun\") pod \"csi-node-driver-bg5zn\" (UID: \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\") " pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:35.932596 kubelet[3220]: E0805 22:24:35.932573 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:35.933736 kubelet[3220]: W0805 22:24:35.933704 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:35.933976 kubelet[3220]: E0805 22:24:35.933962 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:35.962557 kubelet[3220]: E0805 22:24:35.962285 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:35.963034 kubelet[3220]: W0805 22:24:35.962917 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:35.963381 kubelet[3220]: E0805 22:24:35.963139 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:35.964491 kubelet[3220]: E0805 22:24:35.964249 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:35.964491 kubelet[3220]: W0805 22:24:35.964264 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:35.964491 kubelet[3220]: E0805 22:24:35.964284 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.006600 containerd[1681]: time="2024-08-05T22:24:36.006532503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586d96d4d4-z7xcq,Uid:4395fd9f-62b8-499d-8e51-43ae1250fe61,Namespace:calico-system,Attempt:0,}" Aug 5 22:24:36.016594 kubelet[3220]: E0805 22:24:36.015979 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.016594 kubelet[3220]: W0805 22:24:36.016595 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.016772 kubelet[3220]: E0805 22:24:36.016627 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.017157 kubelet[3220]: E0805 22:24:36.017137 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.017157 kubelet[3220]: W0805 22:24:36.017150 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.017476 kubelet[3220]: E0805 22:24:36.017172 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.017793 kubelet[3220]: E0805 22:24:36.017774 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.017793 kubelet[3220]: W0805 22:24:36.017789 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.017939 kubelet[3220]: E0805 22:24:36.017811 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.018918 kubelet[3220]: E0805 22:24:36.018709 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.018918 kubelet[3220]: W0805 22:24:36.018724 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.018918 kubelet[3220]: E0805 22:24:36.018746 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019052 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.019970 kubelet[3220]: W0805 22:24:36.019063 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019247 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.019970 kubelet[3220]: W0805 22:24:36.019258 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019440 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.019970 kubelet[3220]: W0805 22:24:36.019468 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019677 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019758 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019776 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.019970 kubelet[3220]: E0805 22:24:36.019811 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.020483 kubelet[3220]: W0805 22:24:36.019818 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.020483 kubelet[3220]: E0805 22:24:36.020001 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.020483 kubelet[3220]: E0805 22:24:36.020070 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.020483 kubelet[3220]: W0805 22:24:36.020078 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.020483 kubelet[3220]: E0805 22:24:36.020104 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.020483 kubelet[3220]: E0805 22:24:36.020318 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.020483 kubelet[3220]: W0805 22:24:36.020328 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.020553 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.022639 kubelet[3220]: W0805 22:24:36.020563 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.020581 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.020752 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.022639 kubelet[3220]: W0805 22:24:36.020762 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.020776 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.021022 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.022639 kubelet[3220]: W0805 22:24:36.021034 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.021051 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.022639 kubelet[3220]: E0805 22:24:36.021316 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.026578 kubelet[3220]: W0805 22:24:36.021327 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.021361 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.021777 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.021881 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.026578 kubelet[3220]: W0805 22:24:36.021907 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.022008 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.022661 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.026578 kubelet[3220]: W0805 22:24:36.022674 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.026578 kubelet[3220]: E0805 22:24:36.023302 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.026578 kubelet[3220]: W0805 22:24:36.023316 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.023559 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.023592 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.023798 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028000 kubelet[3220]: W0805 22:24:36.023813 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.023881 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.024109 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028000 kubelet[3220]: W0805 22:24:36.024120 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.024234 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.028000 kubelet[3220]: E0805 22:24:36.024563 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028000 kubelet[3220]: W0805 22:24:36.024574 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.024592 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.024845 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028653 kubelet[3220]: W0805 22:24:36.024856 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.024897 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.025188 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028653 kubelet[3220]: W0805 22:24:36.025199 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.025215 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.025539 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.028653 kubelet[3220]: W0805 22:24:36.025550 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.028653 kubelet[3220]: E0805 22:24:36.025567 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.029245 kubelet[3220]: E0805 22:24:36.026077 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.029245 kubelet[3220]: W0805 22:24:36.026089 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.029245 kubelet[3220]: E0805 22:24:36.026105 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:36.029245 kubelet[3220]: E0805 22:24:36.026393 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.029245 kubelet[3220]: W0805 22:24:36.026407 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.029245 kubelet[3220]: E0805 22:24:36.026442 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.037697 kubelet[3220]: E0805 22:24:36.037516 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:36.037697 kubelet[3220]: W0805 22:24:36.037532 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:36.037697 kubelet[3220]: E0805 22:24:36.037550 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:36.061838 containerd[1681]: time="2024-08-05T22:24:36.061744840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:36.063585 containerd[1681]: time="2024-08-05T22:24:36.062393748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:36.063585 containerd[1681]: time="2024-08-05T22:24:36.062424548Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:36.063585 containerd[1681]: time="2024-08-05T22:24:36.062437448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:36.085841 containerd[1681]: time="2024-08-05T22:24:36.084885507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kjtm,Uid:6caeba7c-256a-4e97-8143-e269aee50045,Namespace:calico-system,Attempt:0,}" Aug 5 22:24:36.087731 systemd[1]: Started cri-containerd-f768e1a2bdb17b93e4453fd1dca003565a20d2bf785d65d6643dce8b1b3a0340.scope - libcontainer container f768e1a2bdb17b93e4453fd1dca003565a20d2bf785d65d6643dce8b1b3a0340. Aug 5 22:24:36.143443 containerd[1681]: time="2024-08-05T22:24:36.142495872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:24:36.143443 containerd[1681]: time="2024-08-05T22:24:36.142548173Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:36.143443 containerd[1681]: time="2024-08-05T22:24:36.142567773Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:24:36.143443 containerd[1681]: time="2024-08-05T22:24:36.142579573Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:24:36.169642 systemd[1]: Started cri-containerd-44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c.scope - libcontainer container 44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c. Aug 5 22:24:36.227843 containerd[1681]: time="2024-08-05T22:24:36.226772745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-586d96d4d4-z7xcq,Uid:4395fd9f-62b8-499d-8e51-43ae1250fe61,Namespace:calico-system,Attempt:0,} returns sandbox id \"f768e1a2bdb17b93e4453fd1dca003565a20d2bf785d65d6643dce8b1b3a0340\"" Aug 5 22:24:36.231636 containerd[1681]: time="2024-08-05T22:24:36.231590200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\"" Aug 5 22:24:36.261689 containerd[1681]: time="2024-08-05T22:24:36.261543646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2kjtm,Uid:6caeba7c-256a-4e97-8143-e269aee50045,Namespace:calico-system,Attempt:0,} returns sandbox id \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\"" Aug 5 22:24:37.759784 kubelet[3220]: E0805 22:24:37.759747 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:38.361700 containerd[1681]: time="2024-08-05T22:24:38.361571207Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:38.364301 containerd[1681]: time="2024-08-05T22:24:38.364248437Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.28.0: active requests=0, bytes read=29458030" Aug 5 22:24:38.367550 containerd[1681]: time="2024-08-05T22:24:38.367515674Z" level=info msg="ImageCreate event name:\"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:38.374935 containerd[1681]: time="2024-08-05T22:24:38.374883058Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:38.375679 containerd[1681]: time="2024-08-05T22:24:38.375640467Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.28.0\" with image id \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\", repo tag \"ghcr.io/flatcar/calico/typha:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:eff1501af12b7e27e2ef8f4e55d03d837bcb017aa5663e22e519059c452d51ed\", size \"30905782\" in 2.143905565s" Aug 5 22:24:38.375920 containerd[1681]: time="2024-08-05T22:24:38.375677368Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.28.0\" returns image reference \"sha256:a9372c0f51b54c589e5a16013ed3049b2a052dd6903d72603849fab2c4216fbc\"" Aug 5 22:24:38.377345 containerd[1681]: time="2024-08-05T22:24:38.376854281Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\"" Aug 5 22:24:38.396230 containerd[1681]: time="2024-08-05T22:24:38.396185701Z" level=info msg="CreateContainer within sandbox \"f768e1a2bdb17b93e4453fd1dca003565a20d2bf785d65d6643dce8b1b3a0340\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 5 22:24:38.438965 containerd[1681]: 
time="2024-08-05T22:24:38.438857388Z" level=info msg="CreateContainer within sandbox \"f768e1a2bdb17b93e4453fd1dca003565a20d2bf785d65d6643dce8b1b3a0340\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f94f5ca450412b801c86d08b13e72c65e2b14c6f994950c932c9eb15d8edb42d\"" Aug 5 22:24:38.439810 containerd[1681]: time="2024-08-05T22:24:38.439776798Z" level=info msg="StartContainer for \"f94f5ca450412b801c86d08b13e72c65e2b14c6f994950c932c9eb15d8edb42d\"" Aug 5 22:24:38.486589 systemd[1]: Started cri-containerd-f94f5ca450412b801c86d08b13e72c65e2b14c6f994950c932c9eb15d8edb42d.scope - libcontainer container f94f5ca450412b801c86d08b13e72c65e2b14c6f994950c932c9eb15d8edb42d. Aug 5 22:24:38.540472 containerd[1681]: time="2024-08-05T22:24:38.540395345Z" level=info msg="StartContainer for \"f94f5ca450412b801c86d08b13e72c65e2b14c6f994950c932c9eb15d8edb42d\" returns successfully" Aug 5 22:24:38.859224 kubelet[3220]: I0805 22:24:38.859184 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-typha-586d96d4d4-z7xcq" podStartSLOduration=1.713410192 podCreationTimestamp="2024-08-05 22:24:35 +0000 UTC" firstStartedPulling="2024-08-05 22:24:36.230744891 +0000 UTC m=+19.551875822" lastFinishedPulling="2024-08-05 22:24:38.376469077 +0000 UTC m=+21.697600108" observedRunningTime="2024-08-05 22:24:38.858548072 +0000 UTC m=+22.179679003" watchObservedRunningTime="2024-08-05 22:24:38.859134478 +0000 UTC m=+22.180265509" Aug 5 22:24:38.907159 kubelet[3220]: E0805 22:24:38.906477 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.907159 kubelet[3220]: W0805 22:24:38.906504 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.907159 kubelet[3220]: E0805 22:24:38.907006 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.907701 kubelet[3220]: E0805 22:24:38.907310 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.907701 kubelet[3220]: W0805 22:24:38.907321 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.907701 kubelet[3220]: E0805 22:24:38.907341 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.908357 kubelet[3220]: E0805 22:24:38.907982 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.908357 kubelet[3220]: W0805 22:24:38.907996 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.908357 kubelet[3220]: E0805 22:24:38.908016 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.908357 kubelet[3220]: E0805 22:24:38.908230 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.908357 kubelet[3220]: W0805 22:24:38.908241 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.908357 kubelet[3220]: E0805 22:24:38.908258 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.909100 kubelet[3220]: E0805 22:24:38.908806 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.909100 kubelet[3220]: W0805 22:24:38.908820 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.909100 kubelet[3220]: E0805 22:24:38.908837 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.909100 kubelet[3220]: E0805 22:24:38.909032 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.909100 kubelet[3220]: W0805 22:24:38.909043 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.909100 kubelet[3220]: E0805 22:24:38.909059 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.909864 kubelet[3220]: E0805 22:24:38.909561 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.909864 kubelet[3220]: W0805 22:24:38.909575 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.909864 kubelet[3220]: E0805 22:24:38.909591 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.909864 kubelet[3220]: E0805 22:24:38.909785 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.909864 kubelet[3220]: W0805 22:24:38.909795 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.909864 kubelet[3220]: E0805 22:24:38.909810 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.910527 kubelet[3220]: E0805 22:24:38.910308 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.910527 kubelet[3220]: W0805 22:24:38.910321 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.910527 kubelet[3220]: E0805 22:24:38.910337 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.910967 kubelet[3220]: E0805 22:24:38.910750 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.910967 kubelet[3220]: W0805 22:24:38.910780 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.910967 kubelet[3220]: E0805 22:24:38.910800 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.911398 kubelet[3220]: E0805 22:24:38.911190 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.911398 kubelet[3220]: W0805 22:24:38.911211 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.911398 kubelet[3220]: E0805 22:24:38.911228 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.911783 kubelet[3220]: E0805 22:24:38.911644 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.911783 kubelet[3220]: W0805 22:24:38.911656 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.911783 kubelet[3220]: E0805 22:24:38.911673 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.912159 kubelet[3220]: E0805 22:24:38.912039 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.912159 kubelet[3220]: W0805 22:24:38.912053 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.912159 kubelet[3220]: E0805 22:24:38.912070 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.912688 kubelet[3220]: E0805 22:24:38.912441 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.912688 kubelet[3220]: W0805 22:24:38.912506 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.912688 kubelet[3220]: E0805 22:24:38.912528 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.912994 kubelet[3220]: E0805 22:24:38.912903 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.912994 kubelet[3220]: W0805 22:24:38.912916 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.912994 kubelet[3220]: E0805 22:24:38.912942 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.940979 kubelet[3220]: E0805 22:24:38.940781 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.940979 kubelet[3220]: W0805 22:24:38.940804 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.940979 kubelet[3220]: E0805 22:24:38.940833 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.942045 kubelet[3220]: E0805 22:24:38.941874 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.942045 kubelet[3220]: W0805 22:24:38.941891 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.942045 kubelet[3220]: E0805 22:24:38.941910 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.942475 kubelet[3220]: E0805 22:24:38.942168 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.942475 kubelet[3220]: W0805 22:24:38.942179 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.942475 kubelet[3220]: E0805 22:24:38.942196 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.942847 kubelet[3220]: E0805 22:24:38.942814 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.942847 kubelet[3220]: W0805 22:24:38.942829 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.943244 kubelet[3220]: E0805 22:24:38.943037 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.943466 kubelet[3220]: E0805 22:24:38.943373 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.943466 kubelet[3220]: W0805 22:24:38.943386 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.943466 kubelet[3220]: E0805 22:24:38.943414 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.943956 kubelet[3220]: E0805 22:24:38.943824 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.943956 kubelet[3220]: W0805 22:24:38.943838 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.943956 kubelet[3220]: E0805 22:24:38.943868 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.944563 kubelet[3220]: E0805 22:24:38.944286 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.944563 kubelet[3220]: W0805 22:24:38.944300 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.944563 kubelet[3220]: E0805 22:24:38.944351 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.945150 kubelet[3220]: E0805 22:24:38.945024 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.945150 kubelet[3220]: W0805 22:24:38.945038 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.945377 kubelet[3220]: E0805 22:24:38.945356 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.945561 kubelet[3220]: E0805 22:24:38.945547 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.945561 kubelet[3220]: W0805 22:24:38.945559 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.945669 kubelet[3220]: E0805 22:24:38.945653 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.945911 kubelet[3220]: E0805 22:24:38.945808 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.945911 kubelet[3220]: W0805 22:24:38.945820 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.945911 kubelet[3220]: E0805 22:24:38.945850 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.946190 kubelet[3220]: E0805 22:24:38.946138 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.946372 kubelet[3220]: W0805 22:24:38.946329 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.946731 kubelet[3220]: E0805 22:24:38.946578 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.947206 kubelet[3220]: E0805 22:24:38.947166 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.947206 kubelet[3220]: W0805 22:24:38.947180 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.947689 kubelet[3220]: E0805 22:24:38.947663 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.948087 kubelet[3220]: E0805 22:24:38.947948 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.948087 kubelet[3220]: W0805 22:24:38.947994 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.948087 kubelet[3220]: E0805 22:24:38.948016 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:38.948484 kubelet[3220]: E0805 22:24:38.948464 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.948658 kubelet[3220]: W0805 22:24:38.948570 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.948658 kubelet[3220]: E0805 22:24:38.948612 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.949263 kubelet[3220]: E0805 22:24:38.949136 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.949263 kubelet[3220]: W0805 22:24:38.949148 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.949987 kubelet[3220]: E0805 22:24:38.949880 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.950109 kubelet[3220]: W0805 22:24:38.950094 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.950242 kubelet[3220]: E0805 22:24:38.950181 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.950570 kubelet[3220]: E0805 22:24:38.950547 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.951055 kubelet[3220]: E0805 22:24:38.951038 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.951055 kubelet[3220]: W0805 22:24:38.951051 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.951159 kubelet[3220]: E0805 22:24:38.951068 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:38.961160 kubelet[3220]: E0805 22:24:38.961134 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:38.961160 kubelet[3220]: W0805 22:24:38.961152 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:38.961303 kubelet[3220]: E0805 22:24:38.961179 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.758708 kubelet[3220]: E0805 22:24:39.758313 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:39.826838 containerd[1681]: time="2024-08-05T22:24:39.826787709Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:39.830863 containerd[1681]: time="2024-08-05T22:24:39.830727054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0: active requests=0, bytes read=5140568" Aug 5 22:24:39.836656 containerd[1681]: time="2024-08-05T22:24:39.836598820Z" level=info msg="ImageCreate event name:\"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:39.841511 containerd[1681]: time="2024-08-05T22:24:39.840709067Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:39.841511 containerd[1681]: time="2024-08-05T22:24:39.841290374Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" with image id \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:e57c9db86f1cee1ae6f41257eed1ee2f363783177809217a2045502a09cf7cee\", size \"6588288\" in 1.464392993s" Aug 5 22:24:39.841511 containerd[1681]: time="2024-08-05T22:24:39.841328074Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.28.0\" returns image reference \"sha256:587b28ecfc62e2a60919e6a39f9b25be37c77da99d8c84252716fa3a49a171b9\"" Aug 5 22:24:39.845206 containerd[1681]: time="2024-08-05T22:24:39.845169418Z" level=info msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 5 22:24:39.850405 kubelet[3220]: I0805 22:24:39.850379 3220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:24:39.884931 containerd[1681]: time="2024-08-05T22:24:39.884897371Z" level=info msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2\"" Aug 5 22:24:39.887172 containerd[1681]: time="2024-08-05T22:24:39.885405177Z" level=info msg="StartContainer for \"0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2\"" Aug 5 22:24:39.923272 kubelet[3220]: E0805 22:24:39.921279 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.923272 kubelet[3220]: W0805 22:24:39.921322 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.923272 kubelet[3220]: E0805 
22:24:39.921354 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.924219 kubelet[3220]: E0805 22:24:39.923322 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.924219 kubelet[3220]: W0805 22:24:39.923339 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.924219 kubelet[3220]: E0805 22:24:39.923383 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.924219 kubelet[3220]: E0805 22:24:39.923732 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.924219 kubelet[3220]: W0805 22:24:39.923747 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.924219 kubelet[3220]: E0805 22:24:39.923772 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.926347 kubelet[3220]: E0805 22:24:39.924566 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.926347 kubelet[3220]: W0805 22:24:39.924582 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.926347 kubelet[3220]: E0805 22:24:39.925483 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.926347 kubelet[3220]: E0805 22:24:39.925780 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.926347 kubelet[3220]: W0805 22:24:39.925792 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.926347 kubelet[3220]: E0805 22:24:39.925812 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.927856 kubelet[3220]: E0805 22:24:39.927840 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.928080 kubelet[3220]: W0805 22:24:39.927959 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.928080 kubelet[3220]: E0805 22:24:39.927982 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.928363 kubelet[3220]: E0805 22:24:39.928350 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.928579 kubelet[3220]: W0805 22:24:39.928437 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.928579 kubelet[3220]: E0805 22:24:39.928482 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.928785 kubelet[3220]: E0805 22:24:39.928773 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.928859 kubelet[3220]: W0805 22:24:39.928849 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.929055 kubelet[3220]: E0805 22:24:39.928929 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.929203 kubelet[3220]: E0805 22:24:39.929192 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.929378 kubelet[3220]: W0805 22:24:39.929269 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.929378 kubelet[3220]: E0805 22:24:39.929290 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.929669 kubelet[3220]: E0805 22:24:39.929655 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.929822 kubelet[3220]: W0805 22:24:39.929764 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.929822 kubelet[3220]: E0805 22:24:39.929787 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.930318 kubelet[3220]: E0805 22:24:39.930219 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.930318 kubelet[3220]: W0805 22:24:39.930232 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.930318 kubelet[3220]: E0805 22:24:39.930266 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.931291 kubelet[3220]: E0805 22:24:39.930646 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.931291 kubelet[3220]: W0805 22:24:39.930659 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.931291 kubelet[3220]: E0805 22:24:39.930675 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.932095 kubelet[3220]: E0805 22:24:39.931997 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.932352 kubelet[3220]: W0805 22:24:39.932285 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.932352 kubelet[3220]: E0805 22:24:39.932310 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.933809 kubelet[3220]: E0805 22:24:39.933768 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.934084 kubelet[3220]: W0805 22:24:39.933976 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.934084 kubelet[3220]: E0805 22:24:39.934004 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.935148 kubelet[3220]: E0805 22:24:39.934984 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.935148 kubelet[3220]: W0805 22:24:39.934998 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.935148 kubelet[3220]: E0805 22:24:39.935015 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.943640 systemd[1]: Started cri-containerd-0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2.scope - libcontainer container 0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2. 
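[Editor's note] The repeated driver-call.go / plugins.go errors above come from kubelet's dynamic FlexVolume prober: it walks the vendor~driver directories under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, execs each driver with the `init` argument, and expects a JSON status object on stdout. Because the `uds` executable is not installed yet, the exec fails, the captured output is empty, and unmarshalling an empty string produces Go's "unexpected end of JSON input". A minimal sketch of that failure mode follows; the driverStatus struct and probeInit helper are illustrative, not kubelet's actual types.

    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    // driverStatus mirrors the kind of JSON a FlexVolume driver is expected to
    // print for "init", e.g. {"status":"Success","capabilities":{"attach":false}}.
    type driverStatus struct {
        Status       string          `json:"status"`
        Message      string          `json:"message,omitempty"`
        Capabilities map[string]bool `json:"capabilities,omitempty"`
    }

    // probeInit execs "<driver> init" and decodes its stdout. If the binary is
    // missing, out is empty and json.Unmarshal returns exactly the error seen
    // in the log: "unexpected end of JSON input".
    func probeInit(driver string) (*driverStatus, error) {
        out, execErr := exec.Command(driver, "init").CombinedOutput()
        var st driverStatus
        if err := json.Unmarshal(out, &st); err != nil {
            return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
        }
        return &st, nil
    }

    func main() {
        // Path taken from the log entries above.
        _, err := probeInit("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds")
        fmt.Println(err)
    }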
Aug 5 22:24:39.948199 kubelet[3220]: E0805 22:24:39.947859 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.948199 kubelet[3220]: W0805 22:24:39.947883 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.948199 kubelet[3220]: E0805 22:24:39.947901 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.948650 kubelet[3220]: E0805 22:24:39.948290 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.948650 kubelet[3220]: W0805 22:24:39.948303 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.948650 kubelet[3220]: E0805 22:24:39.948321 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.949214 kubelet[3220]: E0805 22:24:39.948997 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.949214 kubelet[3220]: W0805 22:24:39.949011 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.949214 kubelet[3220]: E0805 22:24:39.949040 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.949628 kubelet[3220]: E0805 22:24:39.949522 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.949628 kubelet[3220]: W0805 22:24:39.949540 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.949855 kubelet[3220]: E0805 22:24:39.949753 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.950292 kubelet[3220]: E0805 22:24:39.950230 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.950292 kubelet[3220]: W0805 22:24:39.950244 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.950660 kubelet[3220]: E0805 22:24:39.950432 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.951617 kubelet[3220]: E0805 22:24:39.951603 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.952098 kubelet[3220]: W0805 22:24:39.952084 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.952249 kubelet[3220]: E0805 22:24:39.952234 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.952597 kubelet[3220]: E0805 22:24:39.952496 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.952597 kubelet[3220]: W0805 22:24:39.952516 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.952825 kubelet[3220]: E0805 22:24:39.952716 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.952825 kubelet[3220]: W0805 22:24:39.952729 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.952825 kubelet[3220]: E0805 22:24:39.952739 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.952825 kubelet[3220]: E0805 22:24:39.952768 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.953819 kubelet[3220]: E0805 22:24:39.953799 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.953819 kubelet[3220]: W0805 22:24:39.953818 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.953970 kubelet[3220]: E0805 22:24:39.953840 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.954079 kubelet[3220]: E0805 22:24:39.954061 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.954079 kubelet[3220]: W0805 22:24:39.954079 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.954312 kubelet[3220]: E0805 22:24:39.954195 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.954840 kubelet[3220]: E0805 22:24:39.954819 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.954840 kubelet[3220]: W0805 22:24:39.954838 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.955017 kubelet[3220]: E0805 22:24:39.954999 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.955627 kubelet[3220]: E0805 22:24:39.955606 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.955627 kubelet[3220]: W0805 22:24:39.955626 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.955770 kubelet[3220]: E0805 22:24:39.955718 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.956807 kubelet[3220]: E0805 22:24:39.956791 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.956807 kubelet[3220]: W0805 22:24:39.956807 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.957366 kubelet[3220]: E0805 22:24:39.957345 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.957539 kubelet[3220]: E0805 22:24:39.957520 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.957539 kubelet[3220]: W0805 22:24:39.957536 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.957793 kubelet[3220]: E0805 22:24:39.957710 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.957870 kubelet[3220]: E0805 22:24:39.957833 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.957870 kubelet[3220]: W0805 22:24:39.957843 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.957870 kubelet[3220]: E0805 22:24:39.957864 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 5 22:24:39.958090 kubelet[3220]: E0805 22:24:39.958074 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.958090 kubelet[3220]: W0805 22:24:39.958084 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.958179 kubelet[3220]: E0805 22:24:39.958115 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.959137 kubelet[3220]: E0805 22:24:39.959115 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.959137 kubelet[3220]: W0805 22:24:39.959136 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.959275 kubelet[3220]: E0805 22:24:39.959163 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.960718 kubelet[3220]: E0805 22:24:39.960702 3220 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 5 22:24:39.960718 kubelet[3220]: W0805 22:24:39.960717 3220 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 5 22:24:39.960892 kubelet[3220]: E0805 22:24:39.960734 3220 plugins.go:723] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 5 22:24:39.988680 containerd[1681]: time="2024-08-05T22:24:39.988604553Z" level=info msg="StartContainer for \"0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2\" returns successfully" Aug 5 22:24:40.007946 systemd[1]: cri-containerd-0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2.scope: Deactivated successfully. Aug 5 22:24:40.044786 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2-rootfs.mount: Deactivated successfully. 
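[Editor's note] The flexvol-driver container started above (image ghcr.io/flatcar/calico/pod2daemon-flexvol) runs briefly and exits, which is consistent with an install-and-exit init step: as far as can be told from the Calico setup, its job is to copy the uds FlexVolume driver binary onto the host under the plugin directory kubelet keeps probing, which is what eventually stops the "executable file not found in $PATH" errors. A hedged sketch of that kind of install step, with the destination path taken from the log and the source path invented for illustration:

    package main

    import (
        "fmt"
        "io"
        "os"
        "path/filepath"
    )

    // installDriver copies a driver binary into the kubelet FlexVolume plugin
    // directory and marks it executable, roughly what an install-style init
    // container must do before kubelet's prober can find the driver.
    func installDriver(src, pluginDir, name string) error {
        if err := os.MkdirAll(pluginDir, 0o755); err != nil {
            return err
        }
        in, err := os.Open(src)
        if err != nil {
            return err
        }
        defer in.Close()

        dst := filepath.Join(pluginDir, name)
        out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
        if err != nil {
            return err
        }
        defer out.Close()

        _, err = io.Copy(out, in)
        return err
    }

    func main() {
        // Source path inside the container image is hypothetical; the
        // destination matches the directory kubelet probes in the log above.
        err := installDriver("/usr/local/bin/flexvol",
            "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds", "uds")
        fmt.Println(err)
    }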
Aug 5 22:24:41.335316 containerd[1681]: time="2024-08-05T22:24:41.335243503Z" level=info msg="shim disconnected" id=0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2 namespace=k8s.io Aug 5 22:24:41.335316 containerd[1681]: time="2024-08-05T22:24:41.335309404Z" level=warning msg="cleaning up after shim disconnected" id=0896d885418400a5c429420aa6cc023eed032cbe3644a49a70ae270fec2d9bb2 namespace=k8s.io Aug 5 22:24:41.335316 containerd[1681]: time="2024-08-05T22:24:41.335321004Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:24:41.758518 kubelet[3220]: E0805 22:24:41.758397 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:41.858507 containerd[1681]: time="2024-08-05T22:24:41.857036051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\"" Aug 5 22:24:43.758849 kubelet[3220]: E0805 22:24:43.758273 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:45.758224 kubelet[3220]: E0805 22:24:45.758180 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:47.026186 containerd[1681]: time="2024-08-05T22:24:47.026136948Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:47.028342 containerd[1681]: time="2024-08-05T22:24:47.028190471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.28.0: active requests=0, bytes read=93087850" Aug 5 22:24:47.031152 containerd[1681]: time="2024-08-05T22:24:47.030289295Z" level=info msg="ImageCreate event name:\"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:47.034812 containerd[1681]: time="2024-08-05T22:24:47.034774646Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:47.035948 containerd[1681]: time="2024-08-05T22:24:47.035914359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.28.0\" with image id \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\", repo tag \"ghcr.io/flatcar/calico/cni:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:67fdc0954d3c96f9a7938fca4d5759c835b773dfb5cb513903e89d21462d886e\", size \"94535610\" in 5.177540793s" Aug 5 22:24:47.036092 containerd[1681]: time="2024-08-05T22:24:47.036067361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.28.0\" returns image reference \"sha256:107014d9f4c891a0235fa80b55df22451e8804ede5b891b632c5779ca3ab07a7\"" Aug 5 22:24:47.038167 containerd[1681]: time="2024-08-05T22:24:47.038128884Z" level=info 
msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 5 22:24:47.080141 containerd[1681]: time="2024-08-05T22:24:47.080097362Z" level=info msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0\"" Aug 5 22:24:47.080761 containerd[1681]: time="2024-08-05T22:24:47.080667969Z" level=info msg="StartContainer for \"fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0\"" Aug 5 22:24:47.123614 systemd[1]: Started cri-containerd-fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0.scope - libcontainer container fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0. Aug 5 22:24:47.156184 containerd[1681]: time="2024-08-05T22:24:47.156129128Z" level=info msg="StartContainer for \"fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0\" returns successfully" Aug 5 22:24:47.759480 kubelet[3220]: E0805 22:24:47.758502 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:48.498331 containerd[1681]: time="2024-08-05T22:24:48.498267409Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 5 22:24:48.500581 systemd[1]: cri-containerd-fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0.scope: Deactivated successfully. Aug 5 22:24:48.522791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0-rootfs.mount: Deactivated successfully. 
Aug 5 22:24:48.564297 kubelet[3220]: I0805 22:24:48.564266 3220 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.583337 3220 topology_manager.go:215] "Topology Admit Handler" podUID="bd5c4120-5335-4525-afd6-e738b7da563e" podNamespace="kube-system" podName="coredns-5dd5756b68-fnwpf" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.587473 3220 topology_manager.go:215] "Topology Admit Handler" podUID="0b91ce80-bd7b-476f-b330-517e59d21ca8" podNamespace="kube-system" podName="coredns-5dd5756b68-vdt5j" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.590280 3220 topology_manager.go:215] "Topology Admit Handler" podUID="d86c2908-ed3c-4609-9fcf-a967e5843ec5" podNamespace="calico-system" podName="calico-kube-controllers-c49f8cb95-5cpsf" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.614005 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phhv4\" (UniqueName: \"kubernetes.io/projected/bd5c4120-5335-4525-afd6-e738b7da563e-kube-api-access-phhv4\") pod \"coredns-5dd5756b68-fnwpf\" (UID: \"bd5c4120-5335-4525-afd6-e738b7da563e\") " pod="kube-system/coredns-5dd5756b68-fnwpf" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.614218 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mlcqm\" (UniqueName: \"kubernetes.io/projected/d86c2908-ed3c-4609-9fcf-a967e5843ec5-kube-api-access-mlcqm\") pod \"calico-kube-controllers-c49f8cb95-5cpsf\" (UID: \"d86c2908-ed3c-4609-9fcf-a967e5843ec5\") " pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" Aug 5 22:24:49.016542 kubelet[3220]: I0805 22:24:48.614260 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d86c2908-ed3c-4609-9fcf-a967e5843ec5-tigera-ca-bundle\") pod \"calico-kube-controllers-c49f8cb95-5cpsf\" (UID: \"d86c2908-ed3c-4609-9fcf-a967e5843ec5\") " pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" Aug 5 22:24:48.599917 systemd[1]: Created slice kubepods-burstable-podbd5c4120_5335_4525_afd6_e738b7da563e.slice - libcontainer container kubepods-burstable-podbd5c4120_5335_4525_afd6_e738b7da563e.slice. 
Aug 5 22:24:49.017491 kubelet[3220]: I0805 22:24:48.614292 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bd5c4120-5335-4525-afd6-e738b7da563e-config-volume\") pod \"coredns-5dd5756b68-fnwpf\" (UID: \"bd5c4120-5335-4525-afd6-e738b7da563e\") " pod="kube-system/coredns-5dd5756b68-fnwpf" Aug 5 22:24:49.017491 kubelet[3220]: I0805 22:24:48.614327 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kb7zf\" (UniqueName: \"kubernetes.io/projected/0b91ce80-bd7b-476f-b330-517e59d21ca8-kube-api-access-kb7zf\") pod \"coredns-5dd5756b68-vdt5j\" (UID: \"0b91ce80-bd7b-476f-b330-517e59d21ca8\") " pod="kube-system/coredns-5dd5756b68-vdt5j" Aug 5 22:24:49.017491 kubelet[3220]: I0805 22:24:48.614365 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0b91ce80-bd7b-476f-b330-517e59d21ca8-config-volume\") pod \"coredns-5dd5756b68-vdt5j\" (UID: \"0b91ce80-bd7b-476f-b330-517e59d21ca8\") " pod="kube-system/coredns-5dd5756b68-vdt5j" Aug 5 22:24:48.610676 systemd[1]: Created slice kubepods-burstable-pod0b91ce80_bd7b_476f_b330_517e59d21ca8.slice - libcontainer container kubepods-burstable-pod0b91ce80_bd7b_476f_b330_517e59d21ca8.slice. Aug 5 22:24:48.620205 systemd[1]: Created slice kubepods-besteffort-podd86c2908_ed3c_4609_9fcf_a967e5843ec5.slice - libcontainer container kubepods-besteffort-podd86c2908_ed3c_4609_9fcf_a967e5843ec5.slice. Aug 5 22:24:49.323860 containerd[1681]: time="2024-08-05T22:24:49.323419003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnwpf,Uid:bd5c4120-5335-4525-afd6-e738b7da563e,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:49.329730 containerd[1681]: time="2024-08-05T22:24:49.329660174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c49f8cb95-5cpsf,Uid:d86c2908-ed3c-4609-9fcf-a967e5843ec5,Namespace:calico-system,Attempt:0,}" Aug 5 22:24:49.330059 containerd[1681]: time="2024-08-05T22:24:49.329848876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vdt5j,Uid:0b91ce80-bd7b-476f-b330-517e59d21ca8,Namespace:kube-system,Attempt:0,}" Aug 5 22:24:49.763990 systemd[1]: Created slice kubepods-besteffort-pod5d7f5978_577b_47bc_9e09_7fc8851b40e1.slice - libcontainer container kubepods-besteffort-pod5d7f5978_577b_47bc_9e09_7fc8851b40e1.slice. 
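[Editor's note] The reconciler_common.go entries above show kubelet attaching the volumes the newly admitted pods declare: a projected service-account token volume (kube-api-access-*) for each pod, plus ConfigMap-backed volumes for the CoreDNS config and the Tigera CA bundle. For reference, the CoreDNS "config-volume" corresponds to a pod volume shaped roughly like the sketch below, built with the upstream k8s.io/api types; the ConfigMap name "coredns" and the "Corefile" key are the conventional ones, not values read from this log.

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
    )

    func main() {
        // "config-volume" as kubelet sees it: a ConfigMap-backed volume that
        // the CoreDNS container mounts to pick up its Corefile.
        vol := corev1.Volume{
            Name: "config-volume",
            VolumeSource: corev1.VolumeSource{
                ConfigMap: &corev1.ConfigMapVolumeSource{
                    LocalObjectReference: corev1.LocalObjectReference{Name: "coredns"},
                    Items: []corev1.KeyToPath{
                        {Key: "Corefile", Path: "Corefile"},
                    },
                },
            },
        }
        fmt.Printf("%+v\n", vol)
    }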
Aug 5 22:24:49.766503 containerd[1681]: time="2024-08-05T22:24:49.766444447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg5zn,Uid:5d7f5978-577b-47bc-9e09-7fc8851b40e1,Namespace:calico-system,Attempt:0,}" Aug 5 22:24:50.181315 containerd[1681]: time="2024-08-05T22:24:50.181154969Z" level=info msg="shim disconnected" id=fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0 namespace=k8s.io Aug 5 22:24:50.181315 containerd[1681]: time="2024-08-05T22:24:50.181218769Z" level=warning msg="cleaning up after shim disconnected" id=fb4e980361f626160b04dda0aab3aebf401e332e6ba0cb5c8700696d03c01ca0 namespace=k8s.io Aug 5 22:24:50.181315 containerd[1681]: time="2024-08-05T22:24:50.181233770Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 5 22:24:50.415569 containerd[1681]: time="2024-08-05T22:24:50.414794729Z" level=error msg="Failed to destroy network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.415799 containerd[1681]: time="2024-08-05T22:24:50.415666039Z" level=error msg="encountered an error cleaning up failed sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.415969 containerd[1681]: time="2024-08-05T22:24:50.415841441Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vdt5j,Uid:0b91ce80-bd7b-476f-b330-517e59d21ca8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.416476 kubelet[3220]: E0805 22:24:50.416238 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.416476 kubelet[3220]: E0805 22:24:50.416320 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-vdt5j" Aug 5 22:24:50.416476 kubelet[3220]: E0805 22:24:50.416350 3220 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-5dd5756b68-vdt5j" Aug 5 22:24:50.417200 kubelet[3220]: E0805 22:24:50.416429 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-vdt5j_kube-system(0b91ce80-bd7b-476f-b330-517e59d21ca8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-vdt5j_kube-system(0b91ce80-bd7b-476f-b330-517e59d21ca8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vdt5j" podUID="0b91ce80-bd7b-476f-b330-517e59d21ca8" Aug 5 22:24:50.429536 containerd[1681]: time="2024-08-05T22:24:50.429484796Z" level=error msg="Failed to destroy network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.430583 containerd[1681]: time="2024-08-05T22:24:50.430242905Z" level=error msg="encountered an error cleaning up failed sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.430583 containerd[1681]: time="2024-08-05T22:24:50.430409807Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg5zn,Uid:5d7f5978-577b-47bc-9e09-7fc8851b40e1,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.430952 kubelet[3220]: E0805 22:24:50.430930 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.431049 kubelet[3220]: E0805 22:24:50.430991 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:50.431049 kubelet[3220]: E0805 22:24:50.431033 3220 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bg5zn" Aug 5 22:24:50.431556 kubelet[3220]: E0805 22:24:50.431378 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bg5zn_calico-system(5d7f5978-577b-47bc-9e09-7fc8851b40e1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bg5zn_calico-system(5d7f5978-577b-47bc-9e09-7fc8851b40e1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:50.435900 containerd[1681]: time="2024-08-05T22:24:50.435866569Z" level=error msg="Failed to destroy network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.436876 containerd[1681]: time="2024-08-05T22:24:50.436841980Z" level=error msg="encountered an error cleaning up failed sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.437033 containerd[1681]: time="2024-08-05T22:24:50.437004182Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c49f8cb95-5cpsf,Uid:d86c2908-ed3c-4609-9fcf-a967e5843ec5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.437639 kubelet[3220]: E0805 22:24:50.437340 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.437639 kubelet[3220]: E0805 22:24:50.437408 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" Aug 5 22:24:50.437639 kubelet[3220]: E0805 22:24:50.437441 3220 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" Aug 5 22:24:50.437825 kubelet[3220]: E0805 22:24:50.437552 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-c49f8cb95-5cpsf_calico-system(d86c2908-ed3c-4609-9fcf-a967e5843ec5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-c49f8cb95-5cpsf_calico-system(d86c2908-ed3c-4609-9fcf-a967e5843ec5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" podUID="d86c2908-ed3c-4609-9fcf-a967e5843ec5" Aug 5 22:24:50.440371 containerd[1681]: time="2024-08-05T22:24:50.440328519Z" level=error msg="Failed to destroy network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.440738 containerd[1681]: time="2024-08-05T22:24:50.440685523Z" level=error msg="encountered an error cleaning up failed sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.440848 containerd[1681]: time="2024-08-05T22:24:50.440748524Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnwpf,Uid:bd5c4120-5335-4525-afd6-e738b7da563e,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.440971 kubelet[3220]: E0805 22:24:50.440927 3220 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.441041 kubelet[3220]: E0805 22:24:50.440972 3220 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fnwpf" Aug 5 22:24:50.441041 kubelet[3220]: E0805 22:24:50.441000 3220 kuberuntime_manager.go:1171] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-5dd5756b68-fnwpf" Aug 5 22:24:50.441312 kubelet[3220]: E0805 22:24:50.441085 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-fnwpf_kube-system(bd5c4120-5335-4525-afd6-e738b7da563e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-fnwpf_kube-system(bd5c4120-5335-4525-afd6-e738b7da563e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fnwpf" podUID="bd5c4120-5335-4525-afd6-e738b7da563e" Aug 5 22:24:50.886934 kubelet[3220]: I0805 22:24:50.886898 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:24:50.887738 containerd[1681]: time="2024-08-05T22:24:50.887665612Z" level=info msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" Aug 5 22:24:50.888317 containerd[1681]: time="2024-08-05T22:24:50.887964216Z" level=info msg="Ensure that sandbox d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925 in task-service has been cleanup successfully" Aug 5 22:24:50.890339 containerd[1681]: time="2024-08-05T22:24:50.890302443Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\"" Aug 5 22:24:50.894469 kubelet[3220]: I0805 22:24:50.893308 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:24:50.895095 containerd[1681]: time="2024-08-05T22:24:50.895018096Z" level=info msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" Aug 5 22:24:50.895300 containerd[1681]: time="2024-08-05T22:24:50.895269199Z" level=info msg="Ensure that sandbox 502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008 in task-service has been cleanup successfully" Aug 5 22:24:50.896652 kubelet[3220]: I0805 22:24:50.896631 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:24:50.898800 containerd[1681]: time="2024-08-05T22:24:50.898764839Z" level=info msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" Aug 5 22:24:50.898983 containerd[1681]: time="2024-08-05T22:24:50.898955541Z" level=info msg="Ensure that sandbox 31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37 in task-service has been cleanup successfully" Aug 5 22:24:50.905479 kubelet[3220]: I0805 22:24:50.905182 3220 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:24:50.907826 containerd[1681]: time="2024-08-05T22:24:50.907799442Z" level=info msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" Aug 5 22:24:50.909762 containerd[1681]: 
time="2024-08-05T22:24:50.909315359Z" level=info msg="Ensure that sandbox eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1 in task-service has been cleanup successfully" Aug 5 22:24:50.984922 containerd[1681]: time="2024-08-05T22:24:50.984701817Z" level=error msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" failed" error="failed to destroy network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.985911 containerd[1681]: time="2024-08-05T22:24:50.985377225Z" level=error msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" failed" error="failed to destroy network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.986021 kubelet[3220]: E0805 22:24:50.985465 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:24:50.986021 kubelet[3220]: E0805 22:24:50.985552 3220 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925"} Aug 5 22:24:50.986021 kubelet[3220]: E0805 22:24:50.985645 3220 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"0b91ce80-bd7b-476f-b330-517e59d21ca8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:50.986021 kubelet[3220]: E0805 22:24:50.985697 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"0b91ce80-bd7b-476f-b330-517e59d21ca8\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-vdt5j" podUID="0b91ce80-bd7b-476f-b330-517e59d21ca8" Aug 5 22:24:50.986285 kubelet[3220]: E0805 22:24:50.985794 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" podSandboxID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:24:50.986285 kubelet[3220]: E0805 22:24:50.985816 3220 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37"} Aug 5 22:24:50.986285 kubelet[3220]: E0805 22:24:50.985854 3220 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"bd5c4120-5335-4525-afd6-e738b7da563e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:50.986285 kubelet[3220]: E0805 22:24:50.985885 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"bd5c4120-5335-4525-afd6-e738b7da563e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-5dd5756b68-fnwpf" podUID="bd5c4120-5335-4525-afd6-e738b7da563e" Aug 5 22:24:50.989130 containerd[1681]: time="2024-08-05T22:24:50.989093967Z" level=error msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" failed" error="failed to destroy network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.989434 kubelet[3220]: E0805 22:24:50.989269 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:24:50.989434 kubelet[3220]: E0805 22:24:50.989295 3220 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008"} Aug 5 22:24:50.989434 kubelet[3220]: E0805 22:24:50.989326 3220 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d86c2908-ed3c-4609-9fcf-a967e5843ec5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:50.989434 kubelet[3220]: E0805 22:24:50.989348 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d86c2908-ed3c-4609-9fcf-a967e5843ec5\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" podUID="d86c2908-ed3c-4609-9fcf-a967e5843ec5" Aug 5 22:24:50.989798 containerd[1681]: time="2024-08-05T22:24:50.989762275Z" level=error msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" failed" error="failed to destroy network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 5 22:24:50.989942 kubelet[3220]: E0805 22:24:50.989924 3220 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:24:50.990010 kubelet[3220]: E0805 22:24:50.989952 3220 kuberuntime_manager.go:1380] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1"} Aug 5 22:24:50.990010 kubelet[3220]: E0805 22:24:50.989997 3220 kuberuntime_manager.go:1080] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Aug 5 22:24:50.990104 kubelet[3220]: E0805 22:24:50.990031 3220 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d7f5978-577b-47bc-9e09-7fc8851b40e1\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bg5zn" podUID="5d7f5978-577b-47bc-9e09-7fc8851b40e1" Aug 5 22:24:51.259144 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925-shm.mount: Deactivated successfully. Aug 5 22:24:51.259296 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37-shm.mount: Deactivated successfully. Aug 5 22:24:51.259397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1-shm.mount: Deactivated successfully. 
Aug 5 22:24:51.260173 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008-shm.mount: Deactivated successfully. Aug 5 22:24:54.340809 kubelet[3220]: I0805 22:24:54.339916 3220 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 5 22:24:56.088938 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110581296.mount: Deactivated successfully. Aug 5 22:24:56.139003 containerd[1681]: time="2024-08-05T22:24:56.138944974Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:56.141192 containerd[1681]: time="2024-08-05T22:24:56.141146399Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.28.0: active requests=0, bytes read=115238750" Aug 5 22:24:56.144505 containerd[1681]: time="2024-08-05T22:24:56.143818530Z" level=info msg="ImageCreate event name:\"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:56.149068 containerd[1681]: time="2024-08-05T22:24:56.148983188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:24:56.150076 containerd[1681]: time="2024-08-05T22:24:56.149682196Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.28.0\" with image id \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\", repo tag \"ghcr.io/flatcar/calico/node:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:95f8004836427050c9997ad0800819ced5636f6bda647b4158fc7c497910c8d0\", size \"115238612\" in 5.259336553s" Aug 5 22:24:56.150076 containerd[1681]: time="2024-08-05T22:24:56.149724197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.28.0\" returns image reference \"sha256:4e42b6f329bc1d197d97f6d2a1289b9e9f4a9560db3a36c8cffb5e95e64e4b49\"" Aug 5 22:24:56.167810 containerd[1681]: time="2024-08-05T22:24:56.167563000Z" level=info msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 5 22:24:56.212661 containerd[1681]: time="2024-08-05T22:24:56.212613012Z" level=info msg="CreateContainer within sandbox \"44597bed51268437d4cfb111b4bbf3c8a9d041243accbf02b1272150daa01c8c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300\"" Aug 5 22:24:56.213389 containerd[1681]: time="2024-08-05T22:24:56.213350821Z" level=info msg="StartContainer for \"4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300\"" Aug 5 22:24:56.250635 systemd[1]: Started cri-containerd-4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300.scope - libcontainer container 4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300. Aug 5 22:24:56.283664 containerd[1681]: time="2024-08-05T22:24:56.283476019Z" level=info msg="StartContainer for \"4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300\" returns successfully" Aug 5 22:24:56.398897 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 5 22:24:56.399054 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
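Note: the entries above show the missing piece arriving. containerd finishes pulling ghcr.io/flatcar/calico/node:v3.28.0 (about 5.26 s), creates the calico-node container inside its sandbox, and systemd starts the matching cri-containerd scope; the wireguard module load is most likely calico-node probing for WireGuard support at startup. For reference, roughly the same pull can be driven through the containerd Go client directly; this is a sketch under the assumptions of the default containerd socket and the CRI "k8s.io" namespace, not something taken from the log.

    // pull_sketch.go - illustrative sketch, not the kubelet's code path:
    // pulls the same image reference via the containerd client API.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Assumes the default containerd socket on the node.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed images live in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        img, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.28.0", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("pulled:", img.Name(), "digest:", img.Target().Digest)
    }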
Aug 5 22:24:58.195628 systemd-networkd[1547]: vxlan.calico: Link UP Aug 5 22:24:58.195642 systemd-networkd[1547]: vxlan.calico: Gained carrier Aug 5 22:24:59.307621 systemd-networkd[1547]: vxlan.calico: Gained IPv6LL Aug 5 22:25:02.761353 containerd[1681]: time="2024-08-05T22:25:02.760241601Z" level=info msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" Aug 5 22:25:02.761353 containerd[1681]: time="2024-08-05T22:25:02.761095611Z" level=info msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" Aug 5 22:25:02.763241 containerd[1681]: time="2024-08-05T22:25:02.763208235Z" level=info msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" Aug 5 22:25:02.763604 containerd[1681]: time="2024-08-05T22:25:02.763555239Z" level=info msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" Aug 5 22:25:02.857402 kubelet[3220]: I0805 22:25:02.856997 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-2kjtm" podStartSLOduration=7.971468181 podCreationTimestamp="2024-08-05 22:24:35 +0000 UTC" firstStartedPulling="2024-08-05 22:24:36.264865384 +0000 UTC m=+19.585996415" lastFinishedPulling="2024-08-05 22:24:56.150324504 +0000 UTC m=+39.471455435" observedRunningTime="2024-08-05 22:24:56.962382242 +0000 UTC m=+40.283513273" watchObservedRunningTime="2024-08-05 22:25:02.856927201 +0000 UTC m=+46.178058232" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.856 [INFO][4546] k8s.go 608: Cleaning up netns ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.856 [INFO][4546] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" iface="eth0" netns="/var/run/netns/cni-ba918a86-1ca7-a444-a6e8-78bb86763490" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.857 [INFO][4546] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" iface="eth0" netns="/var/run/netns/cni-ba918a86-1ca7-a444-a6e8-78bb86763490" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.859 [INFO][4546] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" iface="eth0" netns="/var/run/netns/cni-ba918a86-1ca7-a444-a6e8-78bb86763490" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.859 [INFO][4546] k8s.go 615: Releasing IP address(es) ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.859 [INFO][4546] utils.go 188: Calico CNI releasing IP address ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.909 [INFO][4567] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.916 [INFO][4567] ipam_plugin.go 352: About to acquire host-wide IPAM lock. 
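Note: with calico-node running, systemd-networkd reports the vxlan.calico overlay device coming up (link up, carrier, then an IPv6 link-local address), and the StopPodSandbox retries at 22:25:02 now reach the normal CNI teardown path instead of failing on the nodename lookup. The state behind those "Gained carrier" / "Gained IPv6LL" entries can be confirmed from userspace; a stdlib-only sketch, with the interface name taken from the log:

    // vxlan_state.go - illustrative: prints flags and addresses of vxlan.calico,
    // i.e. the state systemd-networkd is reporting above.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        iface, err := net.InterfaceByName("vxlan.calico") // name as logged by systemd-networkd
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s: flags=%v mtu=%d\n", iface.Name, iface.Flags, iface.MTU)

        addrs, err := iface.Addrs()
        if err != nil {
            log.Fatal(err)
        }
        for _, a := range addrs {
            fmt.Println("  addr:", a) // expect an fe80::/64 address once IPv6LL is gained
        }
    }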
Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.916 [INFO][4567] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.934 [WARNING][4567] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.934 [INFO][4567] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.936 [INFO][4567] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:02.939545 containerd[1681]: 2024-08-05 22:25:02.938 [INFO][4546] k8s.go 621: Teardown processing complete. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:02.943724 containerd[1681]: time="2024-08-05T22:25:02.943378585Z" level=info msg="TearDown network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" successfully" Aug 5 22:25:02.943724 containerd[1681]: time="2024-08-05T22:25:02.943664888Z" level=info msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" returns successfully" Aug 5 22:25:02.945388 containerd[1681]: time="2024-08-05T22:25:02.945343407Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vdt5j,Uid:0b91ce80-bd7b-476f-b330-517e59d21ca8,Namespace:kube-system,Attempt:1,}" Aug 5 22:25:02.949077 systemd[1]: run-netns-cni\x2dba918a86\x2d1ca7\x2da444\x2da6e8\x2d78bb86763490.mount: Deactivated successfully. Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.911 [INFO][4542] k8s.go 608: Cleaning up netns ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.911 [INFO][4542] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" iface="eth0" netns="/var/run/netns/cni-37bd1ac8-5637-8d6f-5a42-22526ddb449c" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.917 [INFO][4542] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" iface="eth0" netns="/var/run/netns/cni-37bd1ac8-5637-8d6f-5a42-22526ddb449c" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.917 [INFO][4542] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. 
ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" iface="eth0" netns="/var/run/netns/cni-37bd1ac8-5637-8d6f-5a42-22526ddb449c" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.917 [INFO][4542] k8s.go 615: Releasing IP address(es) ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.917 [INFO][4542] utils.go 188: Calico CNI releasing IP address ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.973 [INFO][4580] ipam_plugin.go 411: Releasing address using handleID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.973 [INFO][4580] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.973 [INFO][4580] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.981 [WARNING][4580] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.981 [INFO][4580] ipam_plugin.go 439: Releasing address using workloadID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.983 [INFO][4580] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:02.992145 containerd[1681]: 2024-08-05 22:25:02.989 [INFO][4542] k8s.go 621: Teardown processing complete. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:02.995319 containerd[1681]: time="2024-08-05T22:25:02.995185074Z" level=info msg="TearDown network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" successfully" Aug 5 22:25:02.995319 containerd[1681]: time="2024-08-05T22:25:02.995219675Z" level=info msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" returns successfully" Aug 5 22:25:02.996907 systemd[1]: run-netns-cni\x2d37bd1ac8\x2d5637\x2d8d6f\x2d5a42\x2d22526ddb449c.mount: Deactivated successfully. Aug 5 22:25:02.998346 containerd[1681]: time="2024-08-05T22:25:02.998312210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg5zn,Uid:5d7f5978-577b-47bc-9e09-7fc8851b40e1,Namespace:calico-system,Attempt:1,}" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.887 [INFO][4541] k8s.go 608: Cleaning up netns ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.887 [INFO][4541] dataplane_linux.go 530: Deleting workload's device in netns. 
ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" iface="eth0" netns="/var/run/netns/cni-18d5200d-8c0a-7eb6-8e88-3047e925021c" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.888 [INFO][4541] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" iface="eth0" netns="/var/run/netns/cni-18d5200d-8c0a-7eb6-8e88-3047e925021c" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.888 [INFO][4541] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" iface="eth0" netns="/var/run/netns/cni-18d5200d-8c0a-7eb6-8e88-3047e925021c" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.888 [INFO][4541] k8s.go 615: Releasing IP address(es) ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.888 [INFO][4541] utils.go 188: Calico CNI releasing IP address ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.978 [INFO][4573] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.979 [INFO][4573] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.983 [INFO][4573] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.998 [WARNING][4573] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:02.999 [INFO][4573] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:03.001 [INFO][4573] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.005945 containerd[1681]: 2024-08-05 22:25:03.004 [INFO][4541] k8s.go 621: Teardown processing complete. 
ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:03.007211 containerd[1681]: time="2024-08-05T22:25:03.006535503Z" level=info msg="TearDown network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" successfully" Aug 5 22:25:03.007211 containerd[1681]: time="2024-08-05T22:25:03.006574304Z" level=info msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" returns successfully" Aug 5 22:25:03.008507 containerd[1681]: time="2024-08-05T22:25:03.008273723Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c49f8cb95-5cpsf,Uid:d86c2908-ed3c-4609-9fcf-a967e5843ec5,Namespace:calico-system,Attempt:1,}" Aug 5 22:25:03.022348 systemd[1]: run-netns-cni\x2d18d5200d\x2d8c0a\x2d7eb6\x2d8e88\x2d3047e925021c.mount: Deactivated successfully. Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.924 [INFO][4547] k8s.go 608: Cleaning up netns ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.925 [INFO][4547] dataplane_linux.go 530: Deleting workload's device in netns. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" iface="eth0" netns="/var/run/netns/cni-62063289-c525-53d6-f005-3f9640154058" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.926 [INFO][4547] dataplane_linux.go 541: Entered netns, deleting veth. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" iface="eth0" netns="/var/run/netns/cni-62063289-c525-53d6-f005-3f9640154058" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.927 [INFO][4547] dataplane_linux.go 568: Workload's veth was already gone. Nothing to do. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" iface="eth0" netns="/var/run/netns/cni-62063289-c525-53d6-f005-3f9640154058" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.928 [INFO][4547] k8s.go 615: Releasing IP address(es) ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:02.928 [INFO][4547] utils.go 188: Calico CNI releasing IP address ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.011 [INFO][4582] ipam_plugin.go 411: Releasing address using handleID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.011 [INFO][4582] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.011 [INFO][4582] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.025 [WARNING][4582] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.025 [INFO][4582] ipam_plugin.go 439: Releasing address using workloadID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.028 [INFO][4582] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.032479 containerd[1681]: 2024-08-05 22:25:03.031 [INFO][4547] k8s.go 621: Teardown processing complete. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:03.037829 containerd[1681]: time="2024-08-05T22:25:03.037793359Z" level=info msg="TearDown network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" successfully" Aug 5 22:25:03.037946 containerd[1681]: time="2024-08-05T22:25:03.037928760Z" level=info msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" returns successfully" Aug 5 22:25:03.039265 systemd[1]: run-netns-cni\x2d62063289\x2dc525\x2d53d6\x2df005\x2d3f9640154058.mount: Deactivated successfully. Aug 5 22:25:03.039639 containerd[1681]: time="2024-08-05T22:25:03.039609380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnwpf,Uid:bd5c4120-5335-4525-afd6-e738b7da563e,Namespace:kube-system,Attempt:1,}" Aug 5 22:25:03.287637 systemd-networkd[1547]: calibfe158be2c2: Link UP Aug 5 22:25:03.290164 systemd-networkd[1547]: calibfe158be2c2: Gained carrier Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.071 [INFO][4595] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0 coredns-5dd5756b68- kube-system 0b91ce80-bd7b-476f-b330-517e59d21ca8 685 0 2024-08-05 22:24:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 coredns-5dd5756b68-vdt5j eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calibfe158be2c2 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.071 [INFO][4595] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.113 [INFO][4616] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" HandleID="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 
22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.141 [INFO][4616] ipam_plugin.go 264: Auto assigning IP ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" HandleID="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002efec0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"coredns-5dd5756b68-vdt5j", "timestamp":"2024-08-05 22:25:03.113937025 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.141 [INFO][4616] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.141 [INFO][4616] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.141 [INFO][4616] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.152 [INFO][4616] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.177 [INFO][4616] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.200 [INFO][4616] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.207 [INFO][4616] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.212 [INFO][4616] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.214 [INFO][4616] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.216 [INFO][4616] ipam.go 1685: Creating new handle: k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.239 [INFO][4616] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.264 [INFO][4616] ipam.go 1216: Successfully claimed IPs: [192.168.112.129/26] block=192.168.112.128/26 handle="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.265 [INFO][4616] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.129/26] handle="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.333600 containerd[1681]: 
2024-08-05 22:25:03.265 [INFO][4616] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:03.333600 containerd[1681]: 2024-08-05 22:25:03.267 [INFO][4616] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.129/26] IPv6=[] ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" HandleID="k8s-pod-network.f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.273 [INFO][4595] k8s.go 386: Populated endpoint ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0b91ce80-bd7b-476f-b330-517e59d21ca8", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"coredns-5dd5756b68-vdt5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe158be2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.273 [INFO][4595] k8s.go 387: Calico CNI using IPs: [192.168.112.129/32] ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.274 [INFO][4595] dataplane_linux.go 68: Setting the host side veth name to calibfe158be2c2 ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.292 [INFO][4595] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" 
Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.298 [INFO][4595] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0b91ce80-bd7b-476f-b330-517e59d21ca8", ResourceVersion:"685", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f", Pod:"coredns-5dd5756b68-vdt5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe158be2c2", MAC:"ee:55:20:7a:da:fd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.335103 containerd[1681]: 2024-08-05 22:25:03.328 [INFO][4595] k8s.go 500: Wrote updated endpoint to datastore ContainerID="f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f" Namespace="kube-system" Pod="coredns-5dd5756b68-vdt5j" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:03.419098 containerd[1681]: time="2024-08-05T22:25:03.419002295Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:03.419098 containerd[1681]: time="2024-08-05T22:25:03.419058096Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.419469 containerd[1681]: time="2024-08-05T22:25:03.419076696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:03.419469 containerd[1681]: time="2024-08-05T22:25:03.419300799Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.443861 systemd[1]: Started cri-containerd-f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f.scope - libcontainer container f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f. Aug 5 22:25:03.474269 systemd-networkd[1547]: cali8f10e908790: Link UP Aug 5 22:25:03.476562 systemd-networkd[1547]: cali8f10e908790: Gained carrier Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.206 [INFO][4636] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0 coredns-5dd5756b68- kube-system bd5c4120-5335-4525-afd6-e738b7da563e 688 0 2024-08-05 22:24:29 +0000 UTC map[k8s-app:kube-dns pod-template-hash:5dd5756b68 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 coredns-5dd5756b68-fnwpf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8f10e908790 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.206 [INFO][4636] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.338 [INFO][4652] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" HandleID="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.370 [INFO][4652] ipam_plugin.go 264: Auto assigning IP ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" HandleID="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003b3470), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"coredns-5dd5756b68-fnwpf", "timestamp":"2024-08-05 22:25:03.338766983 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.371 [INFO][4652] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.371 [INFO][4652] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.372 [INFO][4652] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.377 [INFO][4652] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.389 [INFO][4652] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.418 [INFO][4652] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.427 [INFO][4652] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.432 [INFO][4652] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.432 [INFO][4652] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.437 [INFO][4652] ipam.go 1685: Creating new handle: k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.447 [INFO][4652] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.454 [INFO][4652] ipam.go 1216: Successfully claimed IPs: [192.168.112.130/26] block=192.168.112.128/26 handle="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.454 [INFO][4652] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.130/26] handle="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.455 [INFO][4652] ipam_plugin.go 373: Released host-wide IPAM lock. 
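Note: each RunPodSandbox above goes through the same Calico IPAM sequence: take the host-wide IPAM lock, confirm this node's affinity for the 192.168.112.128/26 block, and hand out the next free address (.129 for coredns-vdt5j, .130 for coredns-fnwpf, with .131 and .132 claimed further down). The arithmetic behind the block is plain CIDR math; a stdlib sketch, with the block and addresses copied from the log only for illustration:

    // ipam_block.go - illustrative CIDR check for the block the IPAM entries refer to.
    package main

    import (
        "fmt"
        "log"
        "net"
    )

    func main() {
        _, block, err := net.ParseCIDR("192.168.112.128/26") // block from the log
        if err != nil {
            log.Fatal(err)
        }
        ones, bits := block.Mask.Size()
        fmt.Printf("block %s holds %d addresses\n", block, 1<<(bits-ones)) // /26 => 64

        // The pod IPs recorded in this log should all fall inside this block.
        for _, ip := range []string{"192.168.112.129", "192.168.112.130", "192.168.112.131", "192.168.112.132"} {
            fmt.Printf("%s in %s: %v\n", ip, block, block.Contains(net.ParseIP(ip)))
        }
    }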
Aug 5 22:25:03.499957 containerd[1681]: 2024-08-05 22:25:03.455 [INFO][4652] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.130/26] IPv6=[] ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" HandleID="k8s-pod-network.1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.461 [INFO][4636] k8s.go 386: Populated endpoint ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bd5c4120-5335-4525-afd6-e738b7da563e", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"coredns-5dd5756b68-fnwpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f10e908790", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.464 [INFO][4636] k8s.go 387: Calico CNI using IPs: [192.168.112.130/32] ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.467 [INFO][4636] dataplane_linux.go 68: Setting the host side veth name to cali8f10e908790 ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.477 [INFO][4636] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" 
WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.477 [INFO][4636] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bd5c4120-5335-4525-afd6-e738b7da563e", ResourceVersion:"688", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f", Pod:"coredns-5dd5756b68-fnwpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f10e908790", MAC:"ae:a2:51:3a:d2:40", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.501108 containerd[1681]: 2024-08-05 22:25:03.496 [INFO][4636] k8s.go 500: Wrote updated endpoint to datastore ContainerID="1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f" Namespace="kube-system" Pod="coredns-5dd5756b68-fnwpf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:03.540547 systemd-networkd[1547]: calic48e6e01b39: Link UP Aug 5 22:25:03.547135 systemd-networkd[1547]: calic48e6e01b39: Gained carrier Aug 5 22:25:03.558789 containerd[1681]: time="2024-08-05T22:25:03.558295280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-vdt5j,Uid:0b91ce80-bd7b-476f-b330-517e59d21ca8,Namespace:kube-system,Attempt:1,} returns sandbox id \"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f\"" Aug 5 22:25:03.569034 containerd[1681]: time="2024-08-05T22:25:03.568017291Z" level=info msg="CreateContainer within sandbox \"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.201 [INFO][4623] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0 calico-kube-controllers-c49f8cb95- calico-system d86c2908-ed3c-4609-9fcf-a967e5843ec5 686 0 2024-08-05 22:24:35 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:c49f8cb95 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 calico-kube-controllers-c49f8cb95-5cpsf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calic48e6e01b39 [] []}} ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.204 [INFO][4623] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.339 [INFO][4653] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" HandleID="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.411 [INFO][4653] ipam_plugin.go 264: Auto assigning IP ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" HandleID="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003882d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"calico-kube-controllers-c49f8cb95-5cpsf", "timestamp":"2024-08-05 22:25:03.339600392 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.411 [INFO][4653] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.454 [INFO][4653] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.454 [INFO][4653] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.458 [INFO][4653] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.467 [INFO][4653] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.482 [INFO][4653] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.488 [INFO][4653] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.492 [INFO][4653] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.492 [INFO][4653] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.498 [INFO][4653] ipam.go 1685: Creating new handle: k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.507 [INFO][4653] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.518 [INFO][4653] ipam.go 1216: Successfully claimed IPs: [192.168.112.131/26] block=192.168.112.128/26 handle="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.519 [INFO][4653] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.131/26] handle="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.519 [INFO][4653] ipam_plugin.go 373: Released host-wide IPAM lock. 
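Note: for each endpoint the CNI plugin records the host side of the veth pair in the WorkloadEndpoint object (InterfaceName calibfe158be2c2, cali8f10e908790, calic48e6e01b39 in the entries above), adds the MAC once the veth exists, and writes the updated endpoint back to the datastore; systemd-networkd then reports those cali* links gaining carrier. On the node they are visible as ordinary interfaces; a stdlib sketch of how one might list them (illustrative only):

    // cali_veths.go - illustrative: lists host-side cali* veth interfaces that
    // correspond to the InterfaceName fields in the WorkloadEndpoint objects above.
    package main

    import (
        "fmt"
        "log"
        "net"
        "strings"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            log.Fatal(err)
        }
        for _, iface := range ifaces {
            // e.g. calibfe158be2c2, cali8f10e908790, calic48e6e01b39 from the log
            if strings.HasPrefix(iface.Name, "cali") {
                fmt.Printf("%s: flags=%v\n", iface.Name, iface.Flags)
            }
        }
    }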
Aug 5 22:25:03.592773 containerd[1681]: 2024-08-05 22:25:03.519 [INFO][4653] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.131/26] IPv6=[] ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" HandleID="k8s-pod-network.a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.524 [INFO][4623] k8s.go 386: Populated endpoint ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0", GenerateName:"calico-kube-controllers-c49f8cb95-", Namespace:"calico-system", SelfLink:"", UID:"d86c2908-ed3c-4609-9fcf-a967e5843ec5", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c49f8cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"calico-kube-controllers-c49f8cb95-5cpsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic48e6e01b39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.525 [INFO][4623] k8s.go 387: Calico CNI using IPs: [192.168.112.131/32] ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.532 [INFO][4623] dataplane_linux.go 68: Setting the host side veth name to calic48e6e01b39 ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.551 [INFO][4623] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.561 [INFO][4623] k8s.go 414: 
Added Mac, interface name, and active container ID to endpoint ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0", GenerateName:"calico-kube-controllers-c49f8cb95-", Namespace:"calico-system", SelfLink:"", UID:"d86c2908-ed3c-4609-9fcf-a967e5843ec5", ResourceVersion:"686", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c49f8cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc", Pod:"calico-kube-controllers-c49f8cb95-5cpsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic48e6e01b39", MAC:"86:d2:44:e4:af:b1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.593798 containerd[1681]: 2024-08-05 22:25:03.587 [INFO][4623] k8s.go 500: Wrote updated endpoint to datastore ContainerID="a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc" Namespace="calico-system" Pod="calico-kube-controllers-c49f8cb95-5cpsf" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:03.598920 containerd[1681]: time="2024-08-05T22:25:03.598432837Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:03.599090 containerd[1681]: time="2024-08-05T22:25:03.598532338Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.599090 containerd[1681]: time="2024-08-05T22:25:03.598565038Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:03.599090 containerd[1681]: time="2024-08-05T22:25:03.598585838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.622416 containerd[1681]: time="2024-08-05T22:25:03.622377309Z" level=info msg="CreateContainer within sandbox \"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"24c5eafb8aec409c9c1e7a40b919cd0c0798d436afcfb584ce50a6bfaa2050fc\"" Aug 5 22:25:03.627283 containerd[1681]: time="2024-08-05T22:25:03.627109863Z" level=info msg="StartContainer for \"24c5eafb8aec409c9c1e7a40b919cd0c0798d436afcfb584ce50a6bfaa2050fc\"" Aug 5 22:25:03.629842 systemd[1]: Started cri-containerd-1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f.scope - libcontainer container 1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f. Aug 5 22:25:03.653121 systemd-networkd[1547]: calia09598d961d: Link UP Aug 5 22:25:03.653338 systemd-networkd[1547]: calia09598d961d: Gained carrier Aug 5 22:25:03.691075 containerd[1681]: time="2024-08-05T22:25:03.690294882Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:03.691075 containerd[1681]: time="2024-08-05T22:25:03.690357182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.691075 containerd[1681]: time="2024-08-05T22:25:03.690383483Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:03.691075 containerd[1681]: time="2024-08-05T22:25:03.690402783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.215 [INFO][4608] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0 csi-node-driver- calico-system 5d7f5978-577b-47bc-9e09-7fc8851b40e1 687 0 2024-08-05 22:24:35 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:7d7f6c786c k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 csi-node-driver-bg5zn eth0 default [] [] [kns.calico-system ksa.calico-system.default] calia09598d961d [] []}} ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.215 [INFO][4608] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.383 [INFO][4661] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" HandleID="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.424 [INFO][4661] 
ipam_plugin.go 264: Auto assigning IP ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" HandleID="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002b5a80), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"csi-node-driver-bg5zn", "timestamp":"2024-08-05 22:25:03.38337419 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.424 [INFO][4661] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.521 [INFO][4661] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.522 [INFO][4661] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.525 [INFO][4661] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.565 [INFO][4661] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.575 [INFO][4661] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.579 [INFO][4661] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.590 [INFO][4661] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.590 [INFO][4661] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.594 [INFO][4661] ipam.go 1685: Creating new handle: k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774 Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.615 [INFO][4661] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.641 [INFO][4661] ipam.go 1216: Successfully claimed IPs: [192.168.112.132/26] block=192.168.112.128/26 handle="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.641 [INFO][4661] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.132/26] handle="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.641 [INFO][4661] ipam_plugin.go 373: Released host-wide IPAM 
lock. Aug 5 22:25:03.697571 containerd[1681]: 2024-08-05 22:25:03.641 [INFO][4661] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.132/26] IPv6=[] ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" HandleID="k8s-pod-network.18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.648 [INFO][4608] k8s.go 386: Populated endpoint ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d7f5978-577b-47bc-9e09-7fc8851b40e1", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"csi-node-driver-bg5zn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia09598d961d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.649 [INFO][4608] k8s.go 387: Calico CNI using IPs: [192.168.112.132/32] ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.649 [INFO][4608] dataplane_linux.go 68: Setting the host side veth name to calia09598d961d ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.652 [INFO][4608] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.652 [INFO][4608] k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" 
WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d7f5978-577b-47bc-9e09-7fc8851b40e1", ResourceVersion:"687", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774", Pod:"csi-node-driver-bg5zn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia09598d961d", MAC:"52:00:a1:e0:17:08", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:03.699662 containerd[1681]: 2024-08-05 22:25:03.683 [INFO][4608] k8s.go 500: Wrote updated endpoint to datastore ContainerID="18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774" Namespace="calico-system" Pod="csi-node-driver-bg5zn" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:03.723691 systemd[1]: Started cri-containerd-24c5eafb8aec409c9c1e7a40b919cd0c0798d436afcfb584ce50a6bfaa2050fc.scope - libcontainer container 24c5eafb8aec409c9c1e7a40b919cd0c0798d436afcfb584ce50a6bfaa2050fc. Aug 5 22:25:03.739968 containerd[1681]: time="2024-08-05T22:25:03.739782345Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-fnwpf,Uid:bd5c4120-5335-4525-afd6-e738b7da563e,Namespace:kube-system,Attempt:1,} returns sandbox id \"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f\"" Aug 5 22:25:03.748632 containerd[1681]: time="2024-08-05T22:25:03.748489344Z" level=info msg="CreateContainer within sandbox \"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 5 22:25:03.767681 systemd[1]: Started cri-containerd-a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc.scope - libcontainer container a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc. Aug 5 22:25:03.775480 containerd[1681]: time="2024-08-05T22:25:03.775330249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:03.777972 containerd[1681]: time="2024-08-05T22:25:03.776188259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.777972 containerd[1681]: time="2024-08-05T22:25:03.776218359Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:03.777972 containerd[1681]: time="2024-08-05T22:25:03.776231959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:03.798300 containerd[1681]: time="2024-08-05T22:25:03.798208209Z" level=info msg="CreateContainer within sandbox \"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8f7a714ab458c53343737ccc3b5d7d2c26671626bfbdddd8365a681bdbe1de45\"" Aug 5 22:25:03.807665 containerd[1681]: time="2024-08-05T22:25:03.806404502Z" level=info msg="StartContainer for \"8f7a714ab458c53343737ccc3b5d7d2c26671626bfbdddd8365a681bdbe1de45\"" Aug 5 22:25:03.825784 containerd[1681]: time="2024-08-05T22:25:03.825662822Z" level=info msg="StartContainer for \"24c5eafb8aec409c9c1e7a40b919cd0c0798d436afcfb584ce50a6bfaa2050fc\" returns successfully" Aug 5 22:25:03.837916 systemd[1]: Started cri-containerd-18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774.scope - libcontainer container 18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774. Aug 5 22:25:03.868955 systemd[1]: Started cri-containerd-8f7a714ab458c53343737ccc3b5d7d2c26671626bfbdddd8365a681bdbe1de45.scope - libcontainer container 8f7a714ab458c53343737ccc3b5d7d2c26671626bfbdddd8365a681bdbe1de45. Aug 5 22:25:03.921566 containerd[1681]: time="2024-08-05T22:25:03.921512712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-c49f8cb95-5cpsf,Uid:d86c2908-ed3c-4609-9fcf-a967e5843ec5,Namespace:calico-system,Attempt:1,} returns sandbox id \"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc\"" Aug 5 22:25:03.931238 containerd[1681]: time="2024-08-05T22:25:03.930165710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\"" Aug 5 22:25:03.931238 containerd[1681]: time="2024-08-05T22:25:03.931051120Z" level=info msg="StartContainer for \"8f7a714ab458c53343737ccc3b5d7d2c26671626bfbdddd8365a681bdbe1de45\" returns successfully" Aug 5 22:25:03.975302 containerd[1681]: time="2024-08-05T22:25:03.975252923Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bg5zn,Uid:5d7f5978-577b-47bc-9e09-7fc8851b40e1,Namespace:calico-system,Attempt:1,} returns sandbox id \"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774\"" Aug 5 22:25:03.994626 kubelet[3220]: I0805 22:25:03.994595 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fnwpf" podStartSLOduration=34.994550643 podCreationTimestamp="2024-08-05 22:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:25:03.992857024 +0000 UTC m=+47.313987955" watchObservedRunningTime="2024-08-05 22:25:03.994550643 +0000 UTC m=+47.315681574" Aug 5 22:25:04.009609 kubelet[3220]: I0805 22:25:04.009138 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-vdt5j" podStartSLOduration=35.009093808 podCreationTimestamp="2024-08-05 22:24:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-08-05 22:25:04.008524002 +0000 UTC m=+47.329654933" watchObservedRunningTime="2024-08-05 22:25:04.009093808 +0000 UTC m=+47.330224739" Aug 5 22:25:04.683616 systemd-networkd[1547]: 
cali8f10e908790: Gained IPv6LL Aug 5 22:25:04.811674 systemd-networkd[1547]: calibfe158be2c2: Gained IPv6LL Aug 5 22:25:05.067733 systemd-networkd[1547]: calic48e6e01b39: Gained IPv6LL Aug 5 22:25:05.260637 systemd-networkd[1547]: calia09598d961d: Gained IPv6LL Aug 5 22:25:07.232754 containerd[1681]: time="2024-08-05T22:25:07.232700380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:07.235090 containerd[1681]: time="2024-08-05T22:25:07.234920905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.28.0: active requests=0, bytes read=33505793" Aug 5 22:25:07.239868 containerd[1681]: time="2024-08-05T22:25:07.239231954Z" level=info msg="ImageCreate event name:\"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:07.245410 containerd[1681]: time="2024-08-05T22:25:07.245303423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:07.246304 containerd[1681]: time="2024-08-05T22:25:07.246265334Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" with image id \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:c35e88abef622483409fff52313bf764a75095197be4c5a7c7830da342654de1\", size \"34953521\" in 3.316057423s" Aug 5 22:25:07.246387 containerd[1681]: time="2024-08-05T22:25:07.246311034Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.28.0\" returns image reference \"sha256:428d92b02253980b402b9fb18f4cb58be36dc6bcf4893e07462732cb926ea783\"" Aug 5 22:25:07.247758 containerd[1681]: time="2024-08-05T22:25:07.247732151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\"" Aug 5 22:25:07.276200 containerd[1681]: time="2024-08-05T22:25:07.276128474Z" level=info msg="CreateContainer within sandbox \"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Aug 5 22:25:07.317545 containerd[1681]: time="2024-08-05T22:25:07.317498144Z" level=info msg="CreateContainer within sandbox \"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec\"" Aug 5 22:25:07.319643 containerd[1681]: time="2024-08-05T22:25:07.319289465Z" level=info msg="StartContainer for \"6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec\"" Aug 5 22:25:07.371957 systemd[1]: Started cri-containerd-6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec.scope - libcontainer container 6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec. 
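[editor's note] The entries above show Calico IPAM assigning 192.168.112.131 and 192.168.112.132 out of the 192.168.112.128/26 block to calico-kube-controllers-c49f8cb95-5cpsf and csi-node-driver-bg5zn, and the matching cali* veth interfaces gaining carrier. A minimal, hedged Go sketch of how one could confirm those pod IPs from the API server is below; it assumes client-go and a kubeconfig at the default location, and is illustration only, not part of this boot log.

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load the kubeconfig from its default location (~/.kube/config); adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// List the calico-system pods; their status.podIP should match the addresses
	// Calico IPAM just assigned in the log (e.g. 192.168.112.131, 192.168.112.132).
	pods, err := cs.CoreV1().Pods("calico-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s\t%s\t%s\n", p.Name, p.Spec.NodeName, p.Status.PodIP)
	}
}
```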
Aug 5 22:25:07.424320 containerd[1681]: time="2024-08-05T22:25:07.424230058Z" level=info msg="StartContainer for \"6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec\" returns successfully" Aug 5 22:25:08.020500 kubelet[3220]: I0805 22:25:08.019650 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-c49f8cb95-5cpsf" podStartSLOduration=29.702654498 podCreationTimestamp="2024-08-05 22:24:35 +0000 UTC" firstStartedPulling="2024-08-05 22:25:03.929758406 +0000 UTC m=+47.250889437" lastFinishedPulling="2024-08-05 22:25:07.246702839 +0000 UTC m=+50.567833770" observedRunningTime="2024-08-05 22:25:08.010513128 +0000 UTC m=+51.331644159" watchObservedRunningTime="2024-08-05 22:25:08.019598831 +0000 UTC m=+51.340729762" Aug 5 22:25:08.254855 systemd[1]: run-containerd-runc-k8s.io-6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec-runc.vDhAj1.mount: Deactivated successfully. Aug 5 22:25:08.730663 containerd[1681]: time="2024-08-05T22:25:08.730615120Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:08.732859 containerd[1681]: time="2024-08-05T22:25:08.732805445Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.28.0: active requests=0, bytes read=7641062" Aug 5 22:25:08.739735 containerd[1681]: time="2024-08-05T22:25:08.739675423Z" level=info msg="ImageCreate event name:\"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:08.743565 containerd[1681]: time="2024-08-05T22:25:08.743511966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:08.744435 containerd[1681]: time="2024-08-05T22:25:08.744271975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.28.0\" with image id \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\", repo tag \"ghcr.io/flatcar/calico/csi:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:ac5f0089ad8eab325e5d16a59536f9292619adf16736b1554a439a66d543a63d\", size \"9088822\" in 1.494749004s" Aug 5 22:25:08.744435 containerd[1681]: time="2024-08-05T22:25:08.744310976Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.28.0\" returns image reference \"sha256:1a094aeaf1521e225668c83cbf63c0ec63afbdb8c4dd7c3d2aab0ec917d103de\"" Aug 5 22:25:08.746903 containerd[1681]: time="2024-08-05T22:25:08.746869305Z" level=info msg="CreateContainer within sandbox \"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 5 22:25:08.782648 containerd[1681]: time="2024-08-05T22:25:08.782604611Z" level=info msg="CreateContainer within sandbox \"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"2abeddce40fde2fe9e277d8249fed4a9ecc65dc2f5a941a19bb28180823efa94\"" Aug 5 22:25:08.783814 containerd[1681]: time="2024-08-05T22:25:08.783076417Z" level=info msg="StartContainer for \"2abeddce40fde2fe9e277d8249fed4a9ecc65dc2f5a941a19bb28180823efa94\"" Aug 5 22:25:08.830608 systemd[1]: Started cri-containerd-2abeddce40fde2fe9e277d8249fed4a9ecc65dc2f5a941a19bb28180823efa94.scope - libcontainer container 
2abeddce40fde2fe9e277d8249fed4a9ecc65dc2f5a941a19bb28180823efa94. Aug 5 22:25:08.866327 containerd[1681]: time="2024-08-05T22:25:08.866277363Z" level=info msg="StartContainer for \"2abeddce40fde2fe9e277d8249fed4a9ecc65dc2f5a941a19bb28180823efa94\" returns successfully" Aug 5 22:25:08.869103 containerd[1681]: time="2024-08-05T22:25:08.869011494Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\"" Aug 5 22:25:10.672353 containerd[1681]: time="2024-08-05T22:25:10.672303300Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:10.674749 containerd[1681]: time="2024-08-05T22:25:10.674703727Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0: active requests=0, bytes read=10147655" Aug 5 22:25:10.678854 containerd[1681]: time="2024-08-05T22:25:10.678785673Z" level=info msg="ImageCreate event name:\"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:10.682883 containerd[1681]: time="2024-08-05T22:25:10.682821519Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:10.683659 containerd[1681]: time="2024-08-05T22:25:10.683513427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" with image id \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:b3caf3e7b3042b293728a5ab55d893798d60fec55993a9531e82997de0e534cc\", size \"11595367\" in 1.814460133s" Aug 5 22:25:10.683659 containerd[1681]: time="2024-08-05T22:25:10.683552928Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.28.0\" returns image reference \"sha256:0f80feca743f4a84ddda4057266092db9134f9af9e20e12ea6fcfe51d7e3a020\"" Aug 5 22:25:10.686254 containerd[1681]: time="2024-08-05T22:25:10.686041756Z" level=info msg="CreateContainer within sandbox \"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 5 22:25:10.724333 containerd[1681]: time="2024-08-05T22:25:10.724286791Z" level=info msg="CreateContainer within sandbox \"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"ce504c9fb40fe7abf993be26906cdd16fe7d454719d7625121a23c49f71050b6\"" Aug 5 22:25:10.725787 containerd[1681]: time="2024-08-05T22:25:10.725751807Z" level=info msg="StartContainer for \"ce504c9fb40fe7abf993be26906cdd16fe7d454719d7625121a23c49f71050b6\"" Aug 5 22:25:10.774119 systemd[1]: Started cri-containerd-ce504c9fb40fe7abf993be26906cdd16fe7d454719d7625121a23c49f71050b6.scope - libcontainer container ce504c9fb40fe7abf993be26906cdd16fe7d454719d7625121a23c49f71050b6. 
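[editor's note] The "Started cri-containerd-<id>.scope - libcontainer container <id>" entries above are the per-container systemd scopes containerd creates for the CSI and node-driver-registrar containers. A hedged sketch of how one might enumerate those containers and their images directly from containerd follows; it assumes the containerd 1.x Go client, the default /run/containerd/containerd.sock socket, and the "k8s.io" CRI namespace, and is not taken from this log.

```go
package main

import (
	"context"
	"fmt"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Connect to the local containerd daemon over its default socket.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		panic(err)
	}
	defer client.Close()

	// Kubelet/CRI-managed containers live in the "k8s.io" containerd namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
	containers, err := client.Containers(ctx)
	if err != nil {
		panic(err)
	}
	for _, c := range containers {
		img, err := c.Image(ctx)
		if err != nil {
			continue
		}
		// The container IDs printed here correspond to the cri-containerd-<id>.scope
		// units seen in the journal above.
		fmt.Printf("%s\t%s\n", c.ID(), img.Name())
	}
}
```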
Aug 5 22:25:10.807743 containerd[1681]: time="2024-08-05T22:25:10.807367135Z" level=info msg="StartContainer for \"ce504c9fb40fe7abf993be26906cdd16fe7d454719d7625121a23c49f71050b6\" returns successfully" Aug 5 22:25:10.880919 kubelet[3220]: I0805 22:25:10.880181 3220 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 5 22:25:10.880919 kubelet[3220]: I0805 22:25:10.880217 3220 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 5 22:25:11.019236 kubelet[3220]: I0805 22:25:11.019198 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-bg5zn" podStartSLOduration=29.317699216 podCreationTimestamp="2024-08-05 22:24:35 +0000 UTC" firstStartedPulling="2024-08-05 22:25:03.982542106 +0000 UTC m=+47.303673037" lastFinishedPulling="2024-08-05 22:25:10.683995733 +0000 UTC m=+54.005126764" observedRunningTime="2024-08-05 22:25:11.018772839 +0000 UTC m=+54.339903870" watchObservedRunningTime="2024-08-05 22:25:11.019152943 +0000 UTC m=+54.340283874" Aug 5 22:25:16.778946 containerd[1681]: time="2024-08-05T22:25:16.778541829Z" level=info msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.812 [WARNING][5146] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d7f5978-577b-47bc-9e09-7fc8851b40e1", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774", Pod:"csi-node-driver-bg5zn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia09598d961d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.812 [INFO][5146] k8s.go 608: Cleaning up netns ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.812 [INFO][5146] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" iface="eth0" netns="" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.812 [INFO][5146] k8s.go 615: Releasing IP address(es) ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.812 [INFO][5146] utils.go 188: Calico CNI releasing IP address ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.834 [INFO][5153] ipam_plugin.go 411: Releasing address using handleID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.838 [INFO][5153] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.838 [INFO][5153] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.842 [WARNING][5153] ipam_plugin.go 428: Asked to release address but it doesn't exist. Ignoring ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.842 [INFO][5153] ipam_plugin.go 439: Releasing address using workloadID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.844 [INFO][5153] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:16.846004 containerd[1681]: 2024-08-05 22:25:16.844 [INFO][5146] k8s.go 621: Teardown processing complete. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.846564 containerd[1681]: time="2024-08-05T22:25:16.846065297Z" level=info msg="TearDown network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" successfully" Aug 5 22:25:16.846564 containerd[1681]: time="2024-08-05T22:25:16.846102497Z" level=info msg="StopPodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" returns successfully" Aug 5 22:25:16.847097 containerd[1681]: time="2024-08-05T22:25:16.847066908Z" level=info msg="RemovePodSandbox for \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" Aug 5 22:25:16.847201 containerd[1681]: time="2024-08-05T22:25:16.847101708Z" level=info msg="Forcibly stopping sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\"" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.884 [WARNING][5171] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5d7f5978-577b-47bc-9e09-7fc8851b40e1", ResourceVersion:"767", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"7d7f6c786c", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"18c39a5a7755ec14ab25f12a8daee6ae767a54154970dc03ceac89dbb6785774", Pod:"csi-node-driver-bg5zn", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.112.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.default"}, InterfaceName:"calia09598d961d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.884 [INFO][5171] k8s.go 608: Cleaning up netns ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.884 [INFO][5171] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" iface="eth0" netns="" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.884 [INFO][5171] k8s.go 615: Releasing IP address(es) ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.885 [INFO][5171] utils.go 188: Calico CNI releasing IP address ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.903 [INFO][5178] ipam_plugin.go 411: Releasing address using handleID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.903 [INFO][5178] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.903 [INFO][5178] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.910 [WARNING][5178] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.910 [INFO][5178] ipam_plugin.go 439: Releasing address using workloadID ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" HandleID="k8s-pod-network.eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-csi--node--driver--bg5zn-eth0" Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.911 [INFO][5178] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:16.913733 containerd[1681]: 2024-08-05 22:25:16.912 [INFO][5171] k8s.go 621: Teardown processing complete. ContainerID="eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1" Aug 5 22:25:16.914362 containerd[1681]: time="2024-08-05T22:25:16.913770166Z" level=info msg="TearDown network for sandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" successfully" Aug 5 22:25:16.921316 containerd[1681]: time="2024-08-05T22:25:16.921270752Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:25:16.921444 containerd[1681]: time="2024-08-05T22:25:16.921347653Z" level=info msg="RemovePodSandbox \"eb295bf7d1ebf9b4ccebee813e748af0fee352049cb05466dc494a70575d57a1\" returns successfully" Aug 5 22:25:16.921982 containerd[1681]: time="2024-08-05T22:25:16.921946459Z" level=info msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.952 [WARNING][5197] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0", GenerateName:"calico-kube-controllers-c49f8cb95-", Namespace:"calico-system", SelfLink:"", UID:"d86c2908-ed3c-4609-9fcf-a967e5843ec5", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c49f8cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc", Pod:"calico-kube-controllers-c49f8cb95-5cpsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic48e6e01b39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.952 [INFO][5197] k8s.go 608: Cleaning up netns ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.952 [INFO][5197] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" iface="eth0" netns="" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.952 [INFO][5197] k8s.go 615: Releasing IP address(es) ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.952 [INFO][5197] utils.go 188: Calico CNI releasing IP address ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.971 [INFO][5204] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.972 [INFO][5204] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.972 [INFO][5204] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.976 [WARNING][5204] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.976 [INFO][5204] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.978 [INFO][5204] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:16.980132 containerd[1681]: 2024-08-05 22:25:16.979 [INFO][5197] k8s.go 621: Teardown processing complete. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:16.980858 containerd[1681]: time="2024-08-05T22:25:16.980215522Z" level=info msg="TearDown network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" successfully" Aug 5 22:25:16.980858 containerd[1681]: time="2024-08-05T22:25:16.980246522Z" level=info msg="StopPodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" returns successfully" Aug 5 22:25:16.981414 containerd[1681]: time="2024-08-05T22:25:16.981074932Z" level=info msg="RemovePodSandbox for \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" Aug 5 22:25:16.981414 containerd[1681]: time="2024-08-05T22:25:16.981115432Z" level=info msg="Forcibly stopping sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\"" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.015 [WARNING][5222] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0", GenerateName:"calico-kube-controllers-c49f8cb95-", Namespace:"calico-system", SelfLink:"", UID:"d86c2908-ed3c-4609-9fcf-a967e5843ec5", ResourceVersion:"748", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"c49f8cb95", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"a055e9ad7a71f5efb7b4aa17d6d9b26e901d0723e098c64c0c3d933f21338cfc", Pod:"calico-kube-controllers-c49f8cb95-5cpsf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.112.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calic48e6e01b39", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.016 [INFO][5222] k8s.go 608: Cleaning up netns ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.016 [INFO][5222] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" iface="eth0" netns="" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.016 [INFO][5222] k8s.go 615: Releasing IP address(es) ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.016 [INFO][5222] utils.go 188: Calico CNI releasing IP address ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.047 [INFO][5228] ipam_plugin.go 411: Releasing address using handleID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.047 [INFO][5228] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.047 [INFO][5228] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.052 [WARNING][5228] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.052 [INFO][5228] ipam_plugin.go 439: Releasing address using workloadID ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" HandleID="k8s-pod-network.502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--kube--controllers--c49f8cb95--5cpsf-eth0" Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.054 [INFO][5228] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:17.056191 containerd[1681]: 2024-08-05 22:25:17.055 [INFO][5222] k8s.go 621: Teardown processing complete. ContainerID="502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008" Aug 5 22:25:17.057491 containerd[1681]: time="2024-08-05T22:25:17.056880994Z" level=info msg="TearDown network for sandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" successfully" Aug 5 22:25:17.064418 containerd[1681]: time="2024-08-05T22:25:17.064159876Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:25:17.064620 containerd[1681]: time="2024-08-05T22:25:17.064587681Z" level=info msg="RemovePodSandbox \"502a5e4c18ec8da57078003a780263e53083bab3dae69c25f55e4a17e86b0008\" returns successfully" Aug 5 22:25:17.065060 containerd[1681]: time="2024-08-05T22:25:17.065030186Z" level=info msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.095 [WARNING][5247] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0b91ce80-bd7b-476f-b330-517e59d21ca8", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f", Pod:"coredns-5dd5756b68-vdt5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe158be2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.095 [INFO][5247] k8s.go 608: Cleaning up netns ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.095 [INFO][5247] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" iface="eth0" netns="" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.095 [INFO][5247] k8s.go 615: Releasing IP address(es) ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.095 [INFO][5247] utils.go 188: Calico CNI releasing IP address ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.114 [INFO][5254] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.115 [INFO][5254] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.115 [INFO][5254] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.119 [WARNING][5254] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.119 [INFO][5254] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.121 [INFO][5254] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:17.123170 containerd[1681]: 2024-08-05 22:25:17.122 [INFO][5247] k8s.go 621: Teardown processing complete. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.124022 containerd[1681]: time="2024-08-05T22:25:17.123232148Z" level=info msg="TearDown network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" successfully" Aug 5 22:25:17.124022 containerd[1681]: time="2024-08-05T22:25:17.123263048Z" level=info msg="StopPodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" returns successfully" Aug 5 22:25:17.124022 containerd[1681]: time="2024-08-05T22:25:17.123896956Z" level=info msg="RemovePodSandbox for \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" Aug 5 22:25:17.124022 containerd[1681]: time="2024-08-05T22:25:17.123988457Z" level=info msg="Forcibly stopping sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\"" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.156 [WARNING][5272] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"0b91ce80-bd7b-476f-b330-517e59d21ca8", ResourceVersion:"722", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"f251730dce57261ef93ba7766282adf321b274e1d455d95bc698ea4df8523e3f", Pod:"coredns-5dd5756b68-vdt5j", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calibfe158be2c2", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.156 [INFO][5272] k8s.go 608: Cleaning up netns ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.157 [INFO][5272] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" iface="eth0" netns="" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.157 [INFO][5272] k8s.go 615: Releasing IP address(es) ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.157 [INFO][5272] utils.go 188: Calico CNI releasing IP address ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.175 [INFO][5278] ipam_plugin.go 411: Releasing address using handleID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.175 [INFO][5278] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.176 [INFO][5278] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.180 [WARNING][5278] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.180 [INFO][5278] ipam_plugin.go 439: Releasing address using workloadID ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" HandleID="k8s-pod-network.d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--vdt5j-eth0" Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.181 [INFO][5278] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:17.183673 containerd[1681]: 2024-08-05 22:25:17.182 [INFO][5272] k8s.go 621: Teardown processing complete. ContainerID="d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925" Aug 5 22:25:17.184323 containerd[1681]: time="2024-08-05T22:25:17.183714436Z" level=info msg="TearDown network for sandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" successfully" Aug 5 22:25:17.191058 containerd[1681]: time="2024-08-05T22:25:17.191021019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:25:17.191194 containerd[1681]: time="2024-08-05T22:25:17.191088020Z" level=info msg="RemovePodSandbox \"d0eab79645c67a871eb39fe546be69e1351d70c2e7ae75fb364e444f39128925\" returns successfully" Aug 5 22:25:17.191582 containerd[1681]: time="2024-08-05T22:25:17.191548225Z" level=info msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.221 [WARNING][5296] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bd5c4120-5335-4525-afd6-e738b7da563e", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f", Pod:"coredns-5dd5756b68-fnwpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f10e908790", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.221 [INFO][5296] k8s.go 608: Cleaning up netns ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.221 [INFO][5296] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" iface="eth0" netns="" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.221 [INFO][5296] k8s.go 615: Releasing IP address(es) ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.221 [INFO][5296] utils.go 188: Calico CNI releasing IP address ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.240 [INFO][5302] ipam_plugin.go 411: Releasing address using handleID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.240 [INFO][5302] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.240 [INFO][5302] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.245 [WARNING][5302] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.245 [INFO][5302] ipam_plugin.go 439: Releasing address using workloadID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.246 [INFO][5302] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:17.248919 containerd[1681]: 2024-08-05 22:25:17.247 [INFO][5296] k8s.go 621: Teardown processing complete. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.249539 containerd[1681]: time="2024-08-05T22:25:17.248974278Z" level=info msg="TearDown network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" successfully" Aug 5 22:25:17.249539 containerd[1681]: time="2024-08-05T22:25:17.249006178Z" level=info msg="StopPodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" returns successfully" Aug 5 22:25:17.250121 containerd[1681]: time="2024-08-05T22:25:17.249756386Z" level=info msg="RemovePodSandbox for \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" Aug 5 22:25:17.250121 containerd[1681]: time="2024-08-05T22:25:17.249794487Z" level=info msg="Forcibly stopping sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\"" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.284 [WARNING][5320] k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0", GenerateName:"coredns-5dd5756b68-", Namespace:"kube-system", SelfLink:"", UID:"bd5c4120-5335-4525-afd6-e738b7da563e", ResourceVersion:"726", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 24, 29, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"5dd5756b68", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"1f9d4b5fed86fb569ff1171b999788e2f78b24addbafd97e5f0304bd6d4df61f", Pod:"coredns-5dd5756b68-fnwpf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.112.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8f10e908790", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.284 [INFO][5320] k8s.go 608: Cleaning up netns ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.284 [INFO][5320] dataplane_linux.go 526: CleanUpNamespace called with no netns name, ignoring. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" iface="eth0" netns="" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.284 [INFO][5320] k8s.go 615: Releasing IP address(es) ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.284 [INFO][5320] utils.go 188: Calico CNI releasing IP address ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.303 [INFO][5327] ipam_plugin.go 411: Releasing address using handleID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.303 [INFO][5327] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.303 [INFO][5327] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.308 [WARNING][5327] ipam_plugin.go 428: Asked to release address but it doesn't exist. 
Ignoring ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.308 [INFO][5327] ipam_plugin.go 439: Releasing address using workloadID ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" HandleID="k8s-pod-network.31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-coredns--5dd5756b68--fnwpf-eth0" Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.309 [INFO][5327] ipam_plugin.go 373: Released host-wide IPAM lock. Aug 5 22:25:17.313311 containerd[1681]: 2024-08-05 22:25:17.310 [INFO][5320] k8s.go 621: Teardown processing complete. ContainerID="31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37" Aug 5 22:25:17.313311 containerd[1681]: time="2024-08-05T22:25:17.311736791Z" level=info msg="TearDown network for sandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" successfully" Aug 5 22:25:17.319532 containerd[1681]: time="2024-08-05T22:25:17.319489179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Aug 5 22:25:17.319625 containerd[1681]: time="2024-08-05T22:25:17.319559880Z" level=info msg="RemovePodSandbox \"31df9667b8d178a39785da5f3e2d28f140911d73cce569cab9a543d8c8c1ad37\" returns successfully" Aug 5 22:25:22.359069 systemd[1]: run-containerd-runc-k8s.io-4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300-runc.UKZyRP.mount: Deactivated successfully. Aug 5 22:25:25.505843 kubelet[3220]: I0805 22:25:25.505733 3220 topology_manager.go:215] "Topology Admit Handler" podUID="3937ebd9-eaa7-49d0-8776-9d8d96c8db1a" podNamespace="calico-apiserver" podName="calico-apiserver-64d458b5d7-44f4x" Aug 5 22:25:25.518999 systemd[1]: Created slice kubepods-besteffort-pod3937ebd9_eaa7_49d0_8776_9d8d96c8db1a.slice - libcontainer container kubepods-besteffort-pod3937ebd9_eaa7_49d0_8776_9d8d96c8db1a.slice. Aug 5 22:25:25.530420 kubelet[3220]: I0805 22:25:25.529899 3220 topology_manager.go:215] "Topology Admit Handler" podUID="456cd463-429b-4662-9c93-ba6403fdd667" podNamespace="calico-apiserver" podName="calico-apiserver-64d458b5d7-rpl4w" Aug 5 22:25:25.541210 systemd[1]: Created slice kubepods-besteffort-pod456cd463_429b_4662_9c93_ba6403fdd667.slice - libcontainer container kubepods-besteffort-pod456cd463_429b_4662_9c93_ba6403fdd667.slice. 
Aug 5 22:25:25.587825 kubelet[3220]: I0805 22:25:25.587598 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3937ebd9-eaa7-49d0-8776-9d8d96c8db1a-calico-apiserver-certs\") pod \"calico-apiserver-64d458b5d7-44f4x\" (UID: \"3937ebd9-eaa7-49d0-8776-9d8d96c8db1a\") " pod="calico-apiserver/calico-apiserver-64d458b5d7-44f4x" Aug 5 22:25:25.588364 kubelet[3220]: I0805 22:25:25.587785 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kpfl\" (UniqueName: \"kubernetes.io/projected/456cd463-429b-4662-9c93-ba6403fdd667-kube-api-access-7kpfl\") pod \"calico-apiserver-64d458b5d7-rpl4w\" (UID: \"456cd463-429b-4662-9c93-ba6403fdd667\") " pod="calico-apiserver/calico-apiserver-64d458b5d7-rpl4w" Aug 5 22:25:25.588364 kubelet[3220]: I0805 22:25:25.588086 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnvcv\" (UniqueName: \"kubernetes.io/projected/3937ebd9-eaa7-49d0-8776-9d8d96c8db1a-kube-api-access-qnvcv\") pod \"calico-apiserver-64d458b5d7-44f4x\" (UID: \"3937ebd9-eaa7-49d0-8776-9d8d96c8db1a\") " pod="calico-apiserver/calico-apiserver-64d458b5d7-44f4x" Aug 5 22:25:25.588579 kubelet[3220]: I0805 22:25:25.588112 3220 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/456cd463-429b-4662-9c93-ba6403fdd667-calico-apiserver-certs\") pod \"calico-apiserver-64d458b5d7-rpl4w\" (UID: \"456cd463-429b-4662-9c93-ba6403fdd667\") " pod="calico-apiserver/calico-apiserver-64d458b5d7-rpl4w" Aug 5 22:25:25.689498 kubelet[3220]: E0805 22:25:25.689041 3220 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:25:25.689498 kubelet[3220]: E0805 22:25:25.689123 3220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3937ebd9-eaa7-49d0-8776-9d8d96c8db1a-calico-apiserver-certs podName:3937ebd9-eaa7-49d0-8776-9d8d96c8db1a nodeName:}" failed. No retries permitted until 2024-08-05 22:25:26.189100998 +0000 UTC m=+69.510231929 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/3937ebd9-eaa7-49d0-8776-9d8d96c8db1a-calico-apiserver-certs") pod "calico-apiserver-64d458b5d7-44f4x" (UID: "3937ebd9-eaa7-49d0-8776-9d8d96c8db1a") : secret "calico-apiserver-certs" not found Aug 5 22:25:25.689498 kubelet[3220]: E0805 22:25:25.689314 3220 secret.go:194] Couldn't get secret calico-apiserver/calico-apiserver-certs: secret "calico-apiserver-certs" not found Aug 5 22:25:25.689498 kubelet[3220]: E0805 22:25:25.689359 3220 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/456cd463-429b-4662-9c93-ba6403fdd667-calico-apiserver-certs podName:456cd463-429b-4662-9c93-ba6403fdd667 nodeName:}" failed. No retries permitted until 2024-08-05 22:25:26.189344701 +0000 UTC m=+69.510475632 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "calico-apiserver-certs" (UniqueName: "kubernetes.io/secret/456cd463-429b-4662-9c93-ba6403fdd667-calico-apiserver-certs") pod "calico-apiserver-64d458b5d7-rpl4w" (UID: "456cd463-429b-4662-9c93-ba6403fdd667") : secret "calico-apiserver-certs" not found Aug 5 22:25:26.425735 containerd[1681]: time="2024-08-05T22:25:26.425692864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64d458b5d7-44f4x,Uid:3937ebd9-eaa7-49d0-8776-9d8d96c8db1a,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:25:26.446722 containerd[1681]: time="2024-08-05T22:25:26.446442500Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64d458b5d7-rpl4w,Uid:456cd463-429b-4662-9c93-ba6403fdd667,Namespace:calico-apiserver,Attempt:0,}" Aug 5 22:25:28.137232 systemd-networkd[1547]: cali9fd19933e3c: Link UP Aug 5 22:25:28.137843 systemd-networkd[1547]: cali9fd19933e3c: Gained carrier Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.051 [INFO][5404] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0 calico-apiserver-64d458b5d7- calico-apiserver 3937ebd9-eaa7-49d0-8776-9d8d96c8db1a 837 0 2024-08-05 22:25:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64d458b5d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 calico-apiserver-64d458b5d7-44f4x eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9fd19933e3c [] []}} ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.052 [INFO][5404] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.098 [INFO][5420] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" HandleID="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.109 [INFO][5420] ipam_plugin.go 264: Auto assigning IP ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" HandleID="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001149b0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"calico-apiserver-64d458b5d7-44f4x", "timestamp":"2024-08-05 22:25:28.098845669 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.110 [INFO][5420] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.110 [INFO][5420] ipam_plugin.go 367: Acquired host-wide IPAM lock. Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.110 [INFO][5420] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.111 [INFO][5420] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.114 [INFO][5420] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.117 [INFO][5420] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.119 [INFO][5420] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.120 [INFO][5420] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.120 [INFO][5420] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.122 [INFO][5420] ipam.go 1685: Creating new handle: k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4 Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.124 [INFO][5420] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.130 [INFO][5420] ipam.go 1216: Successfully claimed IPs: [192.168.112.133/26] block=192.168.112.128/26 handle="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.130 [INFO][5420] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.133/26] handle="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.130 [INFO][5420] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:25:28.156496 containerd[1681]: 2024-08-05 22:25:28.130 [INFO][5420] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.133/26] IPv6=[] ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" HandleID="k8s-pod-network.31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.133 [INFO][5404] k8s.go 386: Populated endpoint ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0", GenerateName:"calico-apiserver-64d458b5d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3937ebd9-eaa7-49d0-8776-9d8d96c8db1a", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64d458b5d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"calico-apiserver-64d458b5d7-44f4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fd19933e3c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.133 [INFO][5404] k8s.go 387: Calico CNI using IPs: [192.168.112.133/32] ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.133 [INFO][5404] dataplane_linux.go 68: Setting the host side veth name to cali9fd19933e3c ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.138 [INFO][5404] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.140 [INFO][5404] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0", GenerateName:"calico-apiserver-64d458b5d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"3937ebd9-eaa7-49d0-8776-9d8d96c8db1a", ResourceVersion:"837", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64d458b5d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4", Pod:"calico-apiserver-64d458b5d7-44f4x", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fd19933e3c", MAC:"06:ca:9d:b8:f2:09", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:28.158199 containerd[1681]: 2024-08-05 22:25:28.152 [INFO][5404] k8s.go 500: Wrote updated endpoint to datastore ContainerID="31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-44f4x" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--44f4x-eth0" Aug 5 22:25:28.196884 systemd-networkd[1547]: caliccfbe42c1c0: Link UP Aug 5 22:25:28.197722 systemd-networkd[1547]: caliccfbe42c1c0: Gained carrier Aug 5 22:25:28.204487 containerd[1681]: time="2024-08-05T22:25:28.202087441Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:28.204638 containerd[1681]: time="2024-08-05T22:25:28.203003352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:28.204638 containerd[1681]: time="2024-08-05T22:25:28.203037652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:28.204638 containerd[1681]: time="2024-08-05T22:25:28.203058952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.054 [INFO][5396] plugin.go 326: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0 calico-apiserver-64d458b5d7- calico-apiserver 456cd463-429b-4662-9c93-ba6403fdd667 841 0 2024-08-05 22:25:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:64d458b5d7 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4012.1.0-a-bfd2eb4520 calico-apiserver-64d458b5d7-rpl4w eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliccfbe42c1c0 [] []}} ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.054 [INFO][5396] k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.099 [INFO][5424] ipam_plugin.go 224: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" HandleID="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.110 [INFO][5424] ipam_plugin.go 264: Auto assigning IP ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" HandleID="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318d70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4012.1.0-a-bfd2eb4520", "pod":"calico-apiserver-64d458b5d7-rpl4w", "timestamp":"2024-08-05 22:25:28.09982058 +0000 UTC"}, Hostname:"ci-4012.1.0-a-bfd2eb4520", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.111 [INFO][5424] ipam_plugin.go 352: About to acquire host-wide IPAM lock. Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.132 [INFO][5424] ipam_plugin.go 367: Acquired host-wide IPAM lock. 
Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.132 [INFO][5424] ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4012.1.0-a-bfd2eb4520' Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.135 [INFO][5424] ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.150 [INFO][5424] ipam.go 372: Looking up existing affinities for host host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.161 [INFO][5424] ipam.go 489: Trying affinity for 192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.168 [INFO][5424] ipam.go 155: Attempting to load block cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.171 [INFO][5424] ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.112.128/26 host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.171 [INFO][5424] ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.112.128/26 handle="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.173 [INFO][5424] ipam.go 1685: Creating new handle: k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.178 [INFO][5424] ipam.go 1203: Writing block in order to claim IPs block=192.168.112.128/26 handle="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.188 [INFO][5424] ipam.go 1216: Successfully claimed IPs: [192.168.112.134/26] block=192.168.112.128/26 handle="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.188 [INFO][5424] ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.112.134/26] handle="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" host="ci-4012.1.0-a-bfd2eb4520" Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.190 [INFO][5424] ipam_plugin.go 373: Released host-wide IPAM lock. 
Aug 5 22:25:28.227572 containerd[1681]: 2024-08-05 22:25:28.190 [INFO][5424] ipam_plugin.go 282: Calico CNI IPAM assigned addresses IPv4=[192.168.112.134/26] IPv6=[] ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" HandleID="k8s-pod-network.32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Workload="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.193 [INFO][5396] k8s.go 386: Populated endpoint ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0", GenerateName:"calico-apiserver-64d458b5d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"456cd463-429b-4662-9c93-ba6403fdd667", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64d458b5d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"", Pod:"calico-apiserver-64d458b5d7-rpl4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccfbe42c1c0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.193 [INFO][5396] k8s.go 387: Calico CNI using IPs: [192.168.112.134/32] ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.194 [INFO][5396] dataplane_linux.go 68: Setting the host side veth name to caliccfbe42c1c0 ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.198 [INFO][5396] dataplane_linux.go 479: Disabling IPv4 forwarding ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.198 [INFO][5396] k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0", GenerateName:"calico-apiserver-64d458b5d7-", Namespace:"calico-apiserver", SelfLink:"", UID:"456cd463-429b-4662-9c93-ba6403fdd667", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2024, time.August, 5, 22, 25, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"64d458b5d7", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4012.1.0-a-bfd2eb4520", ContainerID:"32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde", Pod:"calico-apiserver-64d458b5d7-rpl4w", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.112.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliccfbe42c1c0", MAC:"1a:b1:7a:77:c5:eb", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Aug 5 22:25:28.228413 containerd[1681]: 2024-08-05 22:25:28.210 [INFO][5396] k8s.go 500: Wrote updated endpoint to datastore ContainerID="32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde" Namespace="calico-apiserver" Pod="calico-apiserver-64d458b5d7-rpl4w" WorkloadEndpoint="ci--4012.1.0--a--bfd2eb4520-k8s-calico--apiserver--64d458b5d7--rpl4w-eth0" Aug 5 22:25:28.273641 systemd[1]: Started cri-containerd-31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4.scope - libcontainer container 31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4. Aug 5 22:25:28.296246 containerd[1681]: time="2024-08-05T22:25:28.296158310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 5 22:25:28.296405 containerd[1681]: time="2024-08-05T22:25:28.296265811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:28.296405 containerd[1681]: time="2024-08-05T22:25:28.296301011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 5 22:25:28.296405 containerd[1681]: time="2024-08-05T22:25:28.296314111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 5 22:25:28.338616 systemd[1]: Started cri-containerd-32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde.scope - libcontainer container 32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde. 
Aug 5 22:25:28.365839 containerd[1681]: time="2024-08-05T22:25:28.365777800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64d458b5d7-44f4x,Uid:3937ebd9-eaa7-49d0-8776-9d8d96c8db1a,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4\"" Aug 5 22:25:28.369162 containerd[1681]: time="2024-08-05T22:25:28.368973937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:25:28.400750 containerd[1681]: time="2024-08-05T22:25:28.400566496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-64d458b5d7-rpl4w,Uid:456cd463-429b-4662-9c93-ba6403fdd667,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde\"" Aug 5 22:25:29.195645 systemd-networkd[1547]: cali9fd19933e3c: Gained IPv6LL Aug 5 22:25:29.579802 systemd-networkd[1547]: caliccfbe42c1c0: Gained IPv6LL Aug 5 22:25:31.431365 containerd[1681]: time="2024-08-05T22:25:31.431310619Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:31.433221 containerd[1681]: time="2024-08-05T22:25:31.433160140Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=40421260" Aug 5 22:25:31.436757 containerd[1681]: time="2024-08-05T22:25:31.436700781Z" level=info msg="ImageCreate event name:\"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:31.441485 containerd[1681]: time="2024-08-05T22:25:31.441301033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:31.442194 containerd[1681]: time="2024-08-05T22:25:31.442137842Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 3.073127805s" Aug 5 22:25:31.442394 containerd[1681]: time="2024-08-05T22:25:31.442298244Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:25:31.443317 containerd[1681]: time="2024-08-05T22:25:31.443036553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\"" Aug 5 22:25:31.445299 containerd[1681]: time="2024-08-05T22:25:31.445269778Z" level=info msg="CreateContainer within sandbox \"31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:25:31.482294 containerd[1681]: time="2024-08-05T22:25:31.482252498Z" level=info msg="CreateContainer within sandbox \"31fa564469e1b09e24a50cbb419852c30e0427bb88e7209553a5657d95044fb4\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f\"" Aug 5 22:25:31.484317 containerd[1681]: time="2024-08-05T22:25:31.482876105Z" level=info msg="StartContainer for 
\"d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f\"" Aug 5 22:25:31.518590 systemd[1]: run-containerd-runc-k8s.io-d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f-runc.X6iRPd.mount: Deactivated successfully. Aug 5 22:25:31.524631 systemd[1]: Started cri-containerd-d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f.scope - libcontainer container d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f. Aug 5 22:25:31.569754 containerd[1681]: time="2024-08-05T22:25:31.569710891Z" level=info msg="StartContainer for \"d6ee4dd1ef277a3d1aa4c53f0bb83aa19cb43071ed74e077206ab53c8ac4b52f\" returns successfully" Aug 5 22:25:31.775194 containerd[1681]: time="2024-08-05T22:25:31.774279115Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.28.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 5 22:25:31.776488 containerd[1681]: time="2024-08-05T22:25:31.776427239Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.28.0: active requests=0, bytes read=77" Aug 5 22:25:31.779263 containerd[1681]: time="2024-08-05T22:25:31.779225571Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" with image id \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.28.0\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:e8f124312a4c41451e51bfc00b6e98929e9eb0510905f3301542719a3e8d2fec\", size \"41869036\" in 336.152818ms" Aug 5 22:25:31.779386 containerd[1681]: time="2024-08-05T22:25:31.779368473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.28.0\" returns image reference \"sha256:6c07591fd1cfafb48d575f75a6b9d8d3cc03bead5b684908ef5e7dd3132794d6\"" Aug 5 22:25:31.782187 containerd[1681]: time="2024-08-05T22:25:31.782162205Z" level=info msg="CreateContainer within sandbox \"32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Aug 5 22:25:31.819244 containerd[1681]: time="2024-08-05T22:25:31.819201725Z" level=info msg="CreateContainer within sandbox \"32d396060e44195b9460592a99438d5caa1c0e1b6436e90942ba2d8ba359ebde\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"4eb33ac0736324a3acd7e98dfaec4a3f23d62a1842f5875e8d3cbb0180f0f5dc\"" Aug 5 22:25:31.820382 containerd[1681]: time="2024-08-05T22:25:31.820345438Z" level=info msg="StartContainer for \"4eb33ac0736324a3acd7e98dfaec4a3f23d62a1842f5875e8d3cbb0180f0f5dc\"" Aug 5 22:25:31.857107 systemd[1]: Started cri-containerd-4eb33ac0736324a3acd7e98dfaec4a3f23d62a1842f5875e8d3cbb0180f0f5dc.scope - libcontainer container 4eb33ac0736324a3acd7e98dfaec4a3f23d62a1842f5875e8d3cbb0180f0f5dc. 
Aug 5 22:25:31.919905 containerd[1681]: time="2024-08-05T22:25:31.919753767Z" level=info msg="StartContainer for \"4eb33ac0736324a3acd7e98dfaec4a3f23d62a1842f5875e8d3cbb0180f0f5dc\" returns successfully" Aug 5 22:25:32.095442 kubelet[3220]: I0805 22:25:32.095316 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64d458b5d7-rpl4w" podStartSLOduration=3.718191503 podCreationTimestamp="2024-08-05 22:25:25 +0000 UTC" firstStartedPulling="2024-08-05 22:25:28.402638219 +0000 UTC m=+71.723769150" lastFinishedPulling="2024-08-05 22:25:31.779717977 +0000 UTC m=+75.100849008" observedRunningTime="2024-08-05 22:25:32.093491741 +0000 UTC m=+75.414622672" watchObservedRunningTime="2024-08-05 22:25:32.095271361 +0000 UTC m=+75.416402292" Aug 5 22:25:32.111306 kubelet[3220]: I0805 22:25:32.111253 3220 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-64d458b5d7-44f4x" podStartSLOduration=4.03660132 podCreationTimestamp="2024-08-05 22:25:25 +0000 UTC" firstStartedPulling="2024-08-05 22:25:28.368213328 +0000 UTC m=+71.689344359" lastFinishedPulling="2024-08-05 22:25:31.44281575 +0000 UTC m=+74.763946681" observedRunningTime="2024-08-05 22:25:32.110609735 +0000 UTC m=+75.431740666" watchObservedRunningTime="2024-08-05 22:25:32.111203642 +0000 UTC m=+75.432334573" Aug 5 22:25:32.478906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount463527250.mount: Deactivated successfully. Aug 5 22:26:17.905120 systemd[1]: Started sshd@7-10.200.4.17:22-10.200.16.10:43302.service - OpenSSH per-connection server daemon (10.200.16.10:43302). Aug 5 22:26:18.503936 sshd[5742]: Accepted publickey for core from 10.200.16.10 port 43302 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:18.505432 sshd[5742]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:18.510515 systemd-logind[1650]: New session 10 of user core. Aug 5 22:26:18.515618 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 5 22:26:19.046626 sshd[5742]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:19.050703 systemd-logind[1650]: Session 10 logged out. Waiting for processes to exit. Aug 5 22:26:19.051591 systemd[1]: sshd@7-10.200.4.17:22-10.200.16.10:43302.service: Deactivated successfully. Aug 5 22:26:19.053868 systemd[1]: session-10.scope: Deactivated successfully. Aug 5 22:26:19.054820 systemd-logind[1650]: Removed session 10. Aug 5 22:26:24.157780 systemd[1]: Started sshd@8-10.200.4.17:22-10.200.16.10:59908.service - OpenSSH per-connection server daemon (10.200.16.10:59908). Aug 5 22:26:24.745323 sshd[5807]: Accepted publickey for core from 10.200.16.10 port 59908 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:24.746885 sshd[5807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:24.751786 systemd-logind[1650]: New session 11 of user core. Aug 5 22:26:24.760013 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 5 22:26:25.222083 sshd[5807]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:25.225887 systemd-logind[1650]: Session 11 logged out. Waiting for processes to exit. Aug 5 22:26:25.226849 systemd[1]: sshd@8-10.200.4.17:22-10.200.16.10:59908.service: Deactivated successfully. Aug 5 22:26:25.228987 systemd[1]: session-11.scope: Deactivated successfully. Aug 5 22:26:25.230279 systemd-logind[1650]: Removed session 11. 
Aug 5 22:26:30.331814 systemd[1]: Started sshd@9-10.200.4.17:22-10.200.16.10:32992.service - OpenSSH per-connection server daemon (10.200.16.10:32992). Aug 5 22:26:30.946016 sshd[5822]: Accepted publickey for core from 10.200.16.10 port 32992 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:30.947581 sshd[5822]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:30.952306 systemd-logind[1650]: New session 12 of user core. Aug 5 22:26:30.957742 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 5 22:26:31.427750 sshd[5822]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:31.430882 systemd[1]: sshd@9-10.200.4.17:22-10.200.16.10:32992.service: Deactivated successfully. Aug 5 22:26:31.433380 systemd[1]: session-12.scope: Deactivated successfully. Aug 5 22:26:31.434922 systemd-logind[1650]: Session 12 logged out. Waiting for processes to exit. Aug 5 22:26:31.436567 systemd-logind[1650]: Removed session 12. Aug 5 22:26:31.540098 systemd[1]: Started sshd@10-10.200.4.17:22-10.200.16.10:32994.service - OpenSSH per-connection server daemon (10.200.16.10:32994). Aug 5 22:26:32.125947 sshd[5838]: Accepted publickey for core from 10.200.16.10 port 32994 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:32.127350 sshd[5838]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:32.132898 systemd-logind[1650]: New session 13 of user core. Aug 5 22:26:32.139640 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 5 22:26:33.253896 sshd[5838]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:33.257325 systemd-logind[1650]: Session 13 logged out. Waiting for processes to exit. Aug 5 22:26:33.260487 systemd[1]: sshd@10-10.200.4.17:22-10.200.16.10:32994.service: Deactivated successfully. Aug 5 22:26:33.263578 systemd[1]: session-13.scope: Deactivated successfully. Aug 5 22:26:33.265856 systemd-logind[1650]: Removed session 13. Aug 5 22:26:33.361030 systemd[1]: Started sshd@11-10.200.4.17:22-10.200.16.10:33010.service - OpenSSH per-connection server daemon (10.200.16.10:33010). Aug 5 22:26:33.961499 sshd[5866]: Accepted publickey for core from 10.200.16.10 port 33010 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:33.962644 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:33.967367 systemd-logind[1650]: New session 14 of user core. Aug 5 22:26:33.973604 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 5 22:26:34.443813 sshd[5866]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:34.447349 systemd[1]: sshd@11-10.200.4.17:22-10.200.16.10:33010.service: Deactivated successfully. Aug 5 22:26:34.449709 systemd[1]: session-14.scope: Deactivated successfully. Aug 5 22:26:34.451734 systemd-logind[1650]: Session 14 logged out. Waiting for processes to exit. Aug 5 22:26:34.453203 systemd-logind[1650]: Removed session 14. Aug 5 22:26:39.553747 systemd[1]: Started sshd@12-10.200.4.17:22-10.200.16.10:37248.service - OpenSSH per-connection server daemon (10.200.16.10:37248). Aug 5 22:26:40.144858 sshd[5884]: Accepted publickey for core from 10.200.16.10 port 37248 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:40.149223 sshd[5884]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:40.159155 systemd-logind[1650]: New session 15 of user core. 
Aug 5 22:26:40.162815 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 5 22:26:40.622041 sshd[5884]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:40.626082 systemd[1]: sshd@12-10.200.4.17:22-10.200.16.10:37248.service: Deactivated successfully. Aug 5 22:26:40.628317 systemd[1]: session-15.scope: Deactivated successfully. Aug 5 22:26:40.629615 systemd-logind[1650]: Session 15 logged out. Waiting for processes to exit. Aug 5 22:26:40.630796 systemd-logind[1650]: Removed session 15. Aug 5 22:26:45.731736 systemd[1]: Started sshd@13-10.200.4.17:22-10.200.16.10:37264.service - OpenSSH per-connection server daemon (10.200.16.10:37264). Aug 5 22:26:46.329926 sshd[5902]: Accepted publickey for core from 10.200.16.10 port 37264 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:46.331481 sshd[5902]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:46.335518 systemd-logind[1650]: New session 16 of user core. Aug 5 22:26:46.341634 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 5 22:26:46.869183 sshd[5902]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:46.872187 systemd[1]: sshd@13-10.200.4.17:22-10.200.16.10:37264.service: Deactivated successfully. Aug 5 22:26:46.874840 systemd[1]: session-16.scope: Deactivated successfully. Aug 5 22:26:46.877040 systemd-logind[1650]: Session 16 logged out. Waiting for processes to exit. Aug 5 22:26:46.878014 systemd-logind[1650]: Removed session 16. Aug 5 22:26:51.977167 systemd[1]: Started sshd@14-10.200.4.17:22-10.200.16.10:39108.service - OpenSSH per-connection server daemon (10.200.16.10:39108). Aug 5 22:26:52.573561 sshd[5936]: Accepted publickey for core from 10.200.16.10 port 39108 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:52.575087 sshd[5936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:52.579541 systemd-logind[1650]: New session 17 of user core. Aug 5 22:26:52.584627 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 5 22:26:53.052856 sshd[5936]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:53.056421 systemd[1]: sshd@14-10.200.4.17:22-10.200.16.10:39108.service: Deactivated successfully. Aug 5 22:26:53.058927 systemd[1]: session-17.scope: Deactivated successfully. Aug 5 22:26:53.060509 systemd-logind[1650]: Session 17 logged out. Waiting for processes to exit. Aug 5 22:26:53.061397 systemd-logind[1650]: Removed session 17. Aug 5 22:26:53.161090 systemd[1]: Started sshd@15-10.200.4.17:22-10.200.16.10:39116.service - OpenSSH per-connection server daemon (10.200.16.10:39116). Aug 5 22:26:53.751792 sshd[5972]: Accepted publickey for core from 10.200.16.10 port 39116 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE Aug 5 22:26:53.753335 sshd[5972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Aug 5 22:26:53.758911 systemd-logind[1650]: New session 18 of user core. Aug 5 22:26:53.763680 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 5 22:26:54.249713 sshd[5972]: pam_unix(sshd:session): session closed for user core Aug 5 22:26:54.254057 systemd[1]: sshd@15-10.200.4.17:22-10.200.16.10:39116.service: Deactivated successfully. Aug 5 22:26:54.257022 systemd[1]: session-18.scope: Deactivated successfully. Aug 5 22:26:54.258499 systemd-logind[1650]: Session 18 logged out. Waiting for processes to exit. Aug 5 22:26:54.259602 systemd-logind[1650]: Removed session 18. 
Aug 5 22:26:54.359742 systemd[1]: Started sshd@16-10.200.4.17:22-10.200.16.10:39118.service - OpenSSH per-connection server daemon (10.200.16.10:39118).
Aug 5 22:26:54.946517 sshd[5988]: Accepted publickey for core from 10.200.16.10 port 39118 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:26:54.947097 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:26:54.952750 systemd-logind[1650]: New session 19 of user core.
Aug 5 22:26:54.958624 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 5 22:26:56.309777 sshd[5988]: pam_unix(sshd:session): session closed for user core
Aug 5 22:26:56.314077 systemd-logind[1650]: Session 19 logged out. Waiting for processes to exit.
Aug 5 22:26:56.314656 systemd[1]: sshd@16-10.200.4.17:22-10.200.16.10:39118.service: Deactivated successfully.
Aug 5 22:26:56.317200 systemd[1]: session-19.scope: Deactivated successfully.
Aug 5 22:26:56.318586 systemd-logind[1650]: Removed session 19.
Aug 5 22:26:56.419755 systemd[1]: Started sshd@17-10.200.4.17:22-10.200.16.10:39130.service - OpenSSH per-connection server daemon (10.200.16.10:39130).
Aug 5 22:26:57.007500 sshd[6006]: Accepted publickey for core from 10.200.16.10 port 39130 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:26:57.008953 sshd[6006]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:26:57.014212 systemd-logind[1650]: New session 20 of user core.
Aug 5 22:26:57.021817 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 5 22:26:57.693568 sshd[6006]: pam_unix(sshd:session): session closed for user core
Aug 5 22:26:57.698516 systemd[1]: sshd@17-10.200.4.17:22-10.200.16.10:39130.service: Deactivated successfully.
Aug 5 22:26:57.701329 systemd[1]: session-20.scope: Deactivated successfully.
Aug 5 22:26:57.702302 systemd-logind[1650]: Session 20 logged out. Waiting for processes to exit.
Aug 5 22:26:57.703266 systemd-logind[1650]: Removed session 20.
Aug 5 22:26:57.815015 systemd[1]: Started sshd@18-10.200.4.17:22-10.200.16.10:39136.service - OpenSSH per-connection server daemon (10.200.16.10:39136).
Aug 5 22:26:58.416738 sshd[6017]: Accepted publickey for core from 10.200.16.10 port 39136 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:26:58.417324 sshd[6017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:26:58.422445 systemd-logind[1650]: New session 21 of user core.
Aug 5 22:26:58.431220 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 5 22:26:58.916550 sshd[6017]: pam_unix(sshd:session): session closed for user core
Aug 5 22:26:58.920647 systemd-logind[1650]: Session 21 logged out. Waiting for processes to exit.
Aug 5 22:26:58.921639 systemd[1]: sshd@18-10.200.4.17:22-10.200.16.10:39136.service: Deactivated successfully.
Aug 5 22:26:58.923911 systemd[1]: session-21.scope: Deactivated successfully.
Aug 5 22:26:58.925516 systemd-logind[1650]: Removed session 21.
Aug 5 22:27:04.026777 systemd[1]: Started sshd@19-10.200.4.17:22-10.200.16.10:34282.service - OpenSSH per-connection server daemon (10.200.16.10:34282).
Aug 5 22:27:04.615816 sshd[6038]: Accepted publickey for core from 10.200.16.10 port 34282 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:04.618007 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:04.629143 systemd-logind[1650]: New session 22 of user core.
Aug 5 22:27:04.631696 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 5 22:27:05.088700 sshd[6038]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:05.092759 systemd[1]: sshd@19-10.200.4.17:22-10.200.16.10:34282.service: Deactivated successfully.
Aug 5 22:27:05.095052 systemd[1]: session-22.scope: Deactivated successfully.
Aug 5 22:27:05.095882 systemd-logind[1650]: Session 22 logged out. Waiting for processes to exit.
Aug 5 22:27:05.097352 systemd-logind[1650]: Removed session 22.
Aug 5 22:27:08.697192 systemd[1]: run-containerd-runc-k8s.io-6563f5582acc47a180e4204d9891a63216b678abdef1b15b457212a774602aec-runc.xcfSuo.mount: Deactivated successfully.
Aug 5 22:27:10.260730 systemd[1]: Started sshd@20-10.200.4.17:22-10.200.16.10:38590.service - OpenSSH per-connection server daemon (10.200.16.10:38590).
Aug 5 22:27:10.953019 sshd[6073]: Accepted publickey for core from 10.200.16.10 port 38590 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:10.954822 sshd[6073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:10.960093 systemd-logind[1650]: New session 23 of user core.
Aug 5 22:27:10.963651 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 5 22:27:11.427495 sshd[6073]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:11.430383 systemd[1]: sshd@20-10.200.4.17:22-10.200.16.10:38590.service: Deactivated successfully.
Aug 5 22:27:11.433281 systemd[1]: session-23.scope: Deactivated successfully.
Aug 5 22:27:11.435169 systemd-logind[1650]: Session 23 logged out. Waiting for processes to exit.
Aug 5 22:27:11.436333 systemd-logind[1650]: Removed session 23.
Aug 5 22:27:16.554811 systemd[1]: Started sshd@21-10.200.4.17:22-10.200.16.10:38602.service - OpenSSH per-connection server daemon (10.200.16.10:38602).
Aug 5 22:27:17.268673 sshd[6094]: Accepted publickey for core from 10.200.16.10 port 38602 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:17.270188 sshd[6094]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:17.275657 systemd-logind[1650]: New session 24 of user core.
Aug 5 22:27:17.282624 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 5 22:27:17.883963 sshd[6094]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:17.888547 systemd[1]: sshd@21-10.200.4.17:22-10.200.16.10:38602.service: Deactivated successfully.
Aug 5 22:27:17.890979 systemd[1]: session-24.scope: Deactivated successfully.
Aug 5 22:27:17.892100 systemd-logind[1650]: Session 24 logged out. Waiting for processes to exit.
Aug 5 22:27:17.893023 systemd-logind[1650]: Removed session 24.
Aug 5 22:27:22.354140 systemd[1]: run-containerd-runc-k8s.io-4ff747a853f727bb84d702a66b0c52b928415e7c029fc40d6fdd1f81cd02b300-runc.wjbiSs.mount: Deactivated successfully.
Aug 5 22:27:22.995418 systemd[1]: Started sshd@22-10.200.4.17:22-10.200.16.10:39578.service - OpenSSH per-connection server daemon (10.200.16.10:39578).
Aug 5 22:27:23.593152 sshd[6150]: Accepted publickey for core from 10.200.16.10 port 39578 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:23.594405 sshd[6150]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:23.598805 systemd-logind[1650]: New session 25 of user core.
Aug 5 22:27:23.603890 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 5 22:27:24.066733 sshd[6150]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:24.070665 systemd[1]: sshd@22-10.200.4.17:22-10.200.16.10:39578.service: Deactivated successfully.
Aug 5 22:27:24.073063 systemd[1]: session-25.scope: Deactivated successfully.
Aug 5 22:27:24.073846 systemd-logind[1650]: Session 25 logged out. Waiting for processes to exit.
Aug 5 22:27:24.075401 systemd-logind[1650]: Removed session 25.
Aug 5 22:27:29.177769 systemd[1]: Started sshd@23-10.200.4.17:22-10.200.16.10:51276.service - OpenSSH per-connection server daemon (10.200.16.10:51276).
Aug 5 22:27:29.769070 sshd[6168]: Accepted publickey for core from 10.200.16.10 port 51276 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:29.769703 sshd[6168]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:29.774873 systemd-logind[1650]: New session 26 of user core.
Aug 5 22:27:29.782611 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 5 22:27:30.253421 sshd[6168]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:30.258061 systemd[1]: sshd@23-10.200.4.17:22-10.200.16.10:51276.service: Deactivated successfully.
Aug 5 22:27:30.260133 systemd[1]: session-26.scope: Deactivated successfully.
Aug 5 22:27:30.261108 systemd-logind[1650]: Session 26 logged out. Waiting for processes to exit.
Aug 5 22:27:30.262965 systemd-logind[1650]: Removed session 26.
Aug 5 22:27:35.357057 systemd[1]: Started sshd@24-10.200.4.17:22-10.200.16.10:51288.service - OpenSSH per-connection server daemon (10.200.16.10:51288).
Aug 5 22:27:35.947159 sshd[6183]: Accepted publickey for core from 10.200.16.10 port 51288 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:35.948721 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:35.953142 systemd-logind[1650]: New session 27 of user core.
Aug 5 22:27:35.956883 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 5 22:27:36.424796 sshd[6183]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:36.428127 systemd[1]: sshd@24-10.200.4.17:22-10.200.16.10:51288.service: Deactivated successfully.
Aug 5 22:27:36.430519 systemd[1]: session-27.scope: Deactivated successfully.
Aug 5 22:27:36.432539 systemd-logind[1650]: Session 27 logged out. Waiting for processes to exit.
Aug 5 22:27:36.433967 systemd-logind[1650]: Removed session 27.
Aug 5 22:27:41.533266 systemd[1]: Started sshd@25-10.200.4.17:22-10.200.16.10:55374.service - OpenSSH per-connection server daemon (10.200.16.10:55374).
Aug 5 22:27:42.136257 sshd[6211]: Accepted publickey for core from 10.200.16.10 port 55374 ssh2: RSA SHA256:adX111JmHbau/CysBZ5LDoDZKZJaK5lBLbJS9aqawPE
Aug 5 22:27:42.138023 sshd[6211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Aug 5 22:27:42.143077 systemd-logind[1650]: New session 28 of user core.
Aug 5 22:27:42.148635 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 5 22:27:42.639828 sshd[6211]: pam_unix(sshd:session): session closed for user core
Aug 5 22:27:42.643146 systemd[1]: sshd@25-10.200.4.17:22-10.200.16.10:55374.service: Deactivated successfully.
Aug 5 22:27:42.645871 systemd[1]: session-28.scope: Deactivated successfully.
Aug 5 22:27:42.648403 systemd-logind[1650]: Session 28 logged out. Waiting for processes to exit.
Aug 5 22:27:42.649324 systemd-logind[1650]: Removed session 28.